Search Results

Search found 4704 results on 189 pages for 'refactoring databases'.


  • What to do if I am working on a language that I don't like

    - by Sayem Ahmed
    Hi there. I really don't know if this is the right place to ask this question, but if it isn't, then I guess someone will notify me. Anyway, I am working in a software development firm which is currently using PowerBuilder to develop a mid-size ERP solution. The work environment and company management are so great that they may be the best in the whole of Bangladesh. The only problem is the technology currently being used, namely PowerBuilder. Now, I am a guy who tends to prefer modern development technologies, like DI containers, ORM, TDD, jQuery, etc. PowerBuilder is a great tool too, but I never liked the application techniques used to build PB applications. These techniques are so inheritance-dependent that they often create a great deal of suffering. I remember, two days ago, I had to change some processing logic in a core user object, and as a result I had to test and re-test all the forms in the application (apparently, there are almost 20 forms, each of them with 3-4 kinds of functionality). Also, learning PB is tough, because online material on it is very, very scarce. I can't afford to read all the documentation that PB provides, because I have hard deadlines on the work that I have to do. Another thing with PB is that applications tend to rely on business logic implemented in the database, which makes debugging a nightmare. As a result, I don't feel motivated to work in this IDE/system/framework (or whatever) anymore. My productivity has greatly decreased, and I am not delivering quality code. I think I have the following options available to me: (1) remain in the current job, keep delivering poor code and let my productivity decrease day by day, taking salaries and bonuses but not delivering quality code or doing my job the way I should; or (2) search for a new job. At this point, option 2 seems a good one, but there are also some issues. As I mentioned before, our management may be the best in the country. Our company owner is himself a software developer with 24 years of experience in software development. He is currently our team leader and system analyst. He is by far the greatest manager and boss I have ever seen. He understands a developer's mentality very well (as he IS himself a developer). He is also a great, kind and generous guy. Our company is only a start-up with 10 developers. Among them, only 3-4 people know the business logic behind the ERP, and I am one of them. If I switch jobs now, it may hamper the development of this product, which I really don't want. I couldn't decide what to do in this situation, so I turned to the community for advice.

    Read the article

  • ETPM Environment Health Monitoring Tools

    - by Paula Speranza-Hadley
    This post provides some useful information about the tools typically used by Oracle ETPM implementations for performance tuning and analysis. This includes tools to monitor and gather performance information and statistics on the database, application server, and client (browser).

    Enterprise Monitoring Tools
    - Oracle Enterprise Manager: OEM Grid Control comes with a comprehensive set of performance and health metrics that allow monitoring of key components in your environment, such as applications, application servers and databases, as well as the back-end components on which they rely, such as hosts, operating systems and storage.

    Tools for the Database
    - Oracle Diagnostics Pack: Automatic Workload Repository (AWR) gathers statistics from memory about the time model (DB time), wait events, Active Session History and high-load SQL queries. Automatic Database Diagnostic Monitor (ADDM) is self-diagnostic software built into the database; it examines and analyzes data captured in AWR to determine possible performance issues, locates the root cause of each issue, provides recommendations for correcting it, and quantifies the expected benefit.
    - Oracle Database Tuning Pack: SQL Tuning Advisor enables you to submit one or more SQL statements as input and receive output in the form of specific advice or recommendations on how to tune the statements. The recommendations relate to collection of statistics on objects, creation of new indexes and restructuring of SQL statements. SQL Access Advisor enables you to optimize the data access paths of SQL queries by recommending a proper set of materialized views, indexes and partitions for a given SQL workload.

    Tools for the Application Server
    - WebLogic Console: a web-based user interface used to configure and control a set of WebLogic servers or clusters (i.e. a "domain"). In any logical group of WebLogic servers there must exist one admin server, which hosts the WebLogic Admin Console application and manages the associated configuration files. WebLogic administrators use the Administration Console for a number of tasks, including starting and stopping WebLogic servers or entire clusters; configuring server parameters, security, database connections and deployed applications; and viewing server status, health and metrics.
    - YourKit for profiling: helps analyze synchronization issues, including which threads were calling wait(), and for how long, and which threads were blocked attempting to acquire a monitor held by another thread (synchronized methods/blocks), and for how long.

    Tools for the Client
    - Fiddler: allows you to inspect traffic logs, debug and set breakpoints.
    - Firebug: allows you to inspect and edit HTML, monitor network activity and debug JavaScript.
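    As a quick illustration of the AWR workflow mentioned above, here is a minimal sketch (assuming the Diagnostics Pack is licensed; the retention and interval values are purely illustrative) of taking a manual snapshot and adjusting the snapshot settings:

        -- Take a manual AWR snapshot (requires the Diagnostics Pack license)
        EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();

        -- Illustrative policy: keep 30 days (43200 minutes) of snapshots,
        -- taken every 30 minutes
        EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
            retention => 43200, interval => 30);

        -- An AWR report is then generated interactively from SQL*Plus with:
        -- @?/rdbms/admin/awrrpt.sql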

    Read the article

  • Advice on designing a robust program to handle a large library of meta-information & programs

    - by Sam Bryant
    So this might be overly vague, but here it is anyway. I'm not really looking for a specific answer, but rather for general design principles, or direction towards resources that deal with problems like this. It's one of my first large-scale applications, and I would like to do it right.

    Brief explanation: My basic problem is that I have to write an application that handles a large library of meta-data, can easily modify the meta-data on the fly, is robust with respect to crashing, and is very efficient. (Sort of like the design parameters of iTunes, although sometimes iTunes performs more poorly than I would like.) If you don't want to read the details, you can skip the rest.

    Long explanation: Specifically, I am writing a program that creates a library of image files and meta-data about these files. There is a list of tags that may or may not apply to each image. The program needs to be able to add new images, add new tags, assign tags to images, and detect duplicate images, all while operating. The program contains an image Viewer which has tagging operations. The idea is that if a given image A is viewed while the library has tags T1, T2, and T3, then that image will have boolean flags for each of those tags (depending on whether the user tagged that image while it was open in the Viewer). However, prior to being viewed in the Viewer, image A would have no value for tags T1, T2, and T3. Instead it would have a "dirty" flag indicating that it is unknown whether or not A has these tags. The program can introduce new tags at any time (which would automatically set all images to "dirty" with respect to the new tag). The program must be fast: it must easily be able to pull up a list of images with or without a certain tag, as well as images which are "dirty" with respect to a tag. It has to be crash-safe, in that if it suddenly crashes, all of the tagging information done in that session is not lost (though perhaps it's okay to lose some of it). Finally, it has to work with a lot of images (10,000). I am a fairly experienced programmer, but I have never tried to write a program with such demanding needs, and I have never worked with databases. With respect to the meta-data storage, there seem to be a few design choices. Choice 1: individual meta-data vs. centralized meta-data. Individual meta-data: have a separate meta-data file for each image. This way, as soon as you change the meta-data for an image, it can be written to the hard disk without having to rewrite the information for all of the other images. Centralized meta-data: have a single file to hold the meta-data for every file. This would probably require meta-data writes at intervals, as opposed to after every change. The benefit here is that you could keep a centralized list of all images with a given tag, etc., making the task of pulling up all images with a given tag very efficient.
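    One hedged illustration of how a database could carry this design (the asker mentions never having worked with databases): an embedded engine such as SQLite gives crash-safety through its journal/WAL and fast tag queries almost for free. In the sketch below all names are hypothetical, and the absence of a decision row is what marks an image as "dirty", which means a newly created tag automatically leaves every image dirty with respect to it:

        -- Hypothetical tri-state tagging schema (SQLite flavour)
        CREATE TABLE image (
            image_id  INTEGER PRIMARY KEY,
            file_path TEXT NOT NULL UNIQUE,
            file_hash TEXT                 -- for duplicate detection
        );

        CREATE TABLE tag (
            tag_id INTEGER PRIMARY KEY,
            name   TEXT NOT NULL UNIQUE
        );

        -- A row here records a decision; a missing row means "dirty"
        CREATE TABLE image_tag (
            image_id INTEGER NOT NULL REFERENCES image(image_id),
            tag_id   INTEGER NOT NULL REFERENCES tag(tag_id),
            applied  INTEGER NOT NULL CHECK (applied IN (0, 1)),
            PRIMARY KEY (image_id, tag_id)
        );

        -- All images still "dirty" with respect to tag 42
        SELECT i.image_id
        FROM image i
        WHERE NOT EXISTS (SELECT 1
                          FROM image_tag it
                          WHERE it.image_id = i.image_id
                            AND it.tag_id = 42);

    Each tagging decision is then a single small INSERT that the engine commits durably, which addresses the crash-safety requirement without rewriting a central file.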

    Read the article

  • Sniffing out SQL Code Smells: Inconsistent use of Symbolic names and Datatypes

    - by Phil Factor
    It is an awkward feeling. You've just delivered a database application that seems to be working fine in production, and you run a few checks on it. You discover that there is a potential bug that, out of sheer good chance, hasn't kicked in to produce an error; but it lurks, like a smoking bomb. Worse, maybe you find that the bug has started its evil work of corrupting the data, but in ways that nobody has, so far, detected. You investigate, and find the damage. You are somehow going to have to repair it. Yes, it still very occasionally happens to me. It is not a nice feeling, and I do anything I can to prevent it happening. That's why I'm interested in SQL code smells. SQL code smells aren't necessarily bad practices; they just show you where to focus your attention when checking an application. Sometimes with databases the bugs can be subtle.

    SQL is rather like HTML: the language does its best to try to carry out your wishes, rather than be picky about your bugs. Most of the time this is a great benefit, but not always. One particular place where it can be detrimental is where you have implicit conversion between different data types. Most of the time it is completely harmless, but we're concerned about the occasional time it isn't. Let's give an example: string truncation. Let's give another, even more frightening one: rounding errors on assignment to a number of different precision. Each requires a blog post to explain in detail, and I'm not going to try now. Just remember that it is not always a good idea to assign data to variables, parameters or even columns when they aren't the same datatype, especially if you are relying on implicit conversion to work its magic. For details of the problem and the consequences, see here: SR0014: Data loss might occur when casting from {Type1} to {Type2}. For any experienced database developer, this is a more frightening read than a vampire story.

    This is why one of the SQL code smells that makes me edgy, in my own or other people's code, is to see parameters, variables and columns that have the same names but different datatypes. Whereas quite a lot of this is perfectly normal and natural, you need to check in case one of two things has gone wrong: either sloppy naming, or mixed datatypes. Sure, it is hard to remember whether you decided that a log entry was 80 or 100 characters long, or what the precision of a number was. That is why a little check like the ones I'm going to show you is excellent for tidying up your code before you check it back into source control!

    1/ Checking parameters only

    If you were just going to check parameters, you might just do this. It simply groups all the parameters, either input or output, of all the routines (e.g. stored procedures or functions) by their name, and checks to see, in the HAVING clause, whether their data types are all the same. If not, it lists all the examples and their origin (the routine). Even this little check can occasionally be scarily revealing.

        ;WITH userParameter AS
          (SELECT
             c.name AS ParameterName,
             OBJECT_SCHEMA_NAME(c.object_id) + '.'
               + OBJECT_NAME(c.object_id) AS ObjectName,
             t.name + ' '
               + CASE --we may have to put in the length
                   WHEN t.name IN ('char', 'varchar', 'nchar', 'nvarchar')
                     THEN '(' + CASE WHEN c.max_length = -1 THEN 'MAX'
                                  ELSE CONVERT(VARCHAR(4),
                                         CASE WHEN t.name IN ('nchar', 'nvarchar')
                                           THEN c.max_length / 2
                                           ELSE c.max_length
                                         END)
                                END + ')'
                   WHEN t.name IN ('decimal', 'numeric')
                     THEN '(' + CONVERT(VARCHAR(4), c.precision)
                          + ',' + CONVERT(VARCHAR(4), c.scale) + ')'
                   ELSE ''
                 END --we've done with putting in the length
               + CASE WHEN xml_collection_id <> 0
                   THEN --deal with object schema names
                     '(' + CASE WHEN is_xml_document = 1
                             THEN 'DOCUMENT '
                             ELSE 'CONTENT '
                           END
                     + COALESCE(
                         (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)
                          FROM sys.xml_schema_collections sc
                          INNER JOIN sys.schemas ss ON sc.schema_id = ss.schema_id
                          WHERE sc.xml_collection_id = c.xml_collection_id), 'NULL') + ')'
                   ELSE ''
                 END AS [DataType]
           FROM sys.parameters c
           INNER JOIN sys.types t ON c.user_type_id = t.user_type_id
           WHERE OBJECT_SCHEMA_NAME(c.object_id) <> 'sys'
             AND parameter_id > 0)
        SELECT CONVERT(CHAR(80), ObjectName + '.' + ParameterName), DataType
        FROM userParameter
        WHERE ParameterName IN
          (SELECT ParameterName FROM userParameter
           GROUP BY ParameterName
           HAVING MIN(DataType) <> MAX(DataType))
        ORDER BY ParameterName

    So, in a very small example here, we have a @ClosingDelimiter variable that is only CHAR(1) when, by the looks of it, it should be up to ten characters long; or, even worse, a function that should take a CHAR(1) and seems to let in a string of ten characters. Worth investigating. Then we have a @Comment variable that can't decide whether it is a VARCHAR(2000) or a VARCHAR(MAX).

    2/ Columns and parameters

    Actually, once we've cleared up the mess we've made of our parameter-naming in the database we're inspecting, we're going to be more interested in listing both columns and parameters. We can do this by modifying the routine to list columns as well as parameters. Because of the slight complexity of creating the string version of the datatypes, we will create a fake table of both columns and parameters so that they can both be processed the same way. After all, we want the datatypes to match. Unfortunately, parameters do not expose all the attributes we are interested in, such as whether they are nullable (oh yes, subtle bugs happen if this isn't consistent for a datatype), so we'll have to leave those out for this check. Voila! A slight modification of the first routine:

        ;WITH userObject AS
          (SELECT
             Name AS DataName, --the name of the parameter or column ('@' removed)
             --and the qualified object name of the routine
             OBJECT_SCHEMA_NAME(ObjectID) + '.' + OBJECT_NAME(ObjectID) AS ObjectName,
             --now the harder bit: the definition of the datatype
             TypeName + ' '
               + CASE --we may have to put in the length, e.g. CHAR(10)
                   WHEN TypeName IN ('char', 'varchar', 'nchar', 'nvarchar')
                     THEN '(' + CASE WHEN MaxLength = -1 THEN 'MAX'
                                  ELSE CONVERT(VARCHAR(4),
                                         CASE WHEN TypeName IN ('nchar', 'nvarchar')
                                           THEN MaxLength / 2
                                           ELSE MaxLength
                                         END)
                                END + ')'
                   WHEN TypeName IN ('decimal', 'numeric') --a BCD number!
                     THEN '(' + CONVERT(VARCHAR(4), [Precision])
                          + ',' + CONVERT(VARCHAR(4), [Scale]) + ')'
                   ELSE ''
                 END --we've done with putting in the length
               + CASE WHEN XML_collection_ID <> 0 --tush tush. XML
                   THEN --deal with object schema names
                     '(' + CASE WHEN is_XML_Document = 1
                             THEN 'DOCUMENT '
                             ELSE 'CONTENT '
                           END
                     + COALESCE(
                         (SELECT TOP 1 QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)
                          FROM sys.xml_schema_collections sc
                          INNER JOIN sys.schemas ss ON sc.schema_id = ss.schema_id
                          WHERE sc.xml_collection_id = XML_collection_ID), 'NULL') + ')'
                   ELSE ''
                 END AS [DataType],
             DataObjectType
           FROM
             (SELECT t.name AS TypeName, REPLACE(c.name, '@', '') AS Name,
                     c.max_length AS MaxLength, c.precision AS [Precision],
                     c.scale AS [Scale], c.[object_id] AS ObjectID,
                     XML_collection_ID, is_XML_Document, 'P' AS DataObjectType
              FROM sys.parameters c
              INNER JOIN sys.types t ON c.user_type_id = t.user_type_id
                AND parameter_id > 0
              UNION ALL
              SELECT t.name AS TypeName, c.name AS Name,
                     c.max_length AS MaxLength, c.precision AS [Precision],
                     c.scale AS [Scale], c.[object_id] AS ObjectID,
                     XML_collection_ID, is_XML_Document, 'C' AS DataObjectType
              FROM sys.columns c
              INNER JOIN sys.types t ON c.user_type_id = t.user_type_id
              WHERE OBJECT_SCHEMA_NAME(c.object_id) <> 'sys'
             ) f)
        SELECT CONVERT(CHAR(80), ObjectName + '.'
                 + CASE WHEN DataObjectType = 'P' THEN '@' ELSE '' END
                 + DataName), DataType
        FROM userObject
        WHERE DataName IN
          (SELECT DataName FROM userObject
           GROUP BY DataName
           HAVING MIN(DataType) <> MAX(DataType))
        ORDER BY DataName

    Hmm. I can tell you I found quite a few minor issues with the various databases I tested this on, and found some potential bugs that really leap out at you from the results. Here is the start of the results for AdventureWorks. Yes, AccountNumber is, for some reason, a VARCHAR(10) in the Customer table. Hmm, odd. Why is a city fifty characters long in that view? The idea of the description of a colour being 256 characters long seems over-ambitious. Go down the list and you'll spot other mistakes. There are no bugs, just mess.

    We started out with a listing to examine parameters, then we mixed parameters and columns. Our last listing is for a slightly more in-depth look at table columns. You'll notice that we've deliberately removed the indication of whether a column is persisted, or is an identity column, because that gives us false positives for our code smells. If you just want to browse your metadata for other reasons (and it can quite help in some circumstances), then uncomment them!

        ;WITH userColumns AS
          (SELECT
             c.name AS ColumnName,
             OBJECT_SCHEMA_NAME(c.object_id) + '.'
               + OBJECT_NAME(c.object_id) AS ObjectName,
             REPLACE(t.name + ' '
               + CASE WHEN is_computed = 1
                   THEN ' AS ' + --do DDL for a computed column
                     (SELECT definition FROM sys.computed_columns cc
                      WHERE cc.object_id = c.object_id
                        AND cc.column_id = c.column_id)
                 --we may have to put in the length
                 WHEN t.name IN ('char', 'varchar', 'nchar', 'nvarchar')
                   THEN '(' + CASE WHEN c.max_length = -1 THEN 'MAX'
                                ELSE CONVERT(VARCHAR(4),
                                       CASE WHEN t.name IN ('nchar', 'nvarchar')
                                         THEN c.max_length / 2
                                         ELSE c.max_length
                                       END)
                              END + ')'
                 WHEN t.name IN ('decimal', 'numeric')
                   THEN '(' + CONVERT(VARCHAR(4), c.precision)
                        + ',' + CONVERT(VARCHAR(4), c.scale) + ')'
                 ELSE ''
               END
               + CASE WHEN c.is_rowguidcol = 1 THEN ' ROWGUIDCOL' ELSE '' END
               + CASE WHEN XML_collection_ID <> 0
                   THEN --deal with object schema names
                     '(' + CASE WHEN is_XML_Document = 1
                             THEN 'DOCUMENT '
                             ELSE 'CONTENT '
                           END
                     + COALESCE(
                         (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)
                          FROM sys.xml_schema_collections sc
                          INNER JOIN sys.schemas ss ON sc.schema_id = ss.schema_id
                          WHERE sc.xml_collection_id = c.xml_collection_id), 'NULL') + ')'
                   ELSE ''
                 END
               + CASE WHEN is_identity = 1
                   THEN CASE WHEN OBJECTPROPERTY(object_id, 'IsUserTable') = 1
                              AND COLUMNPROPERTY(object_id, c.name, 'IsIDNotForRepl') = 0
                              AND OBJECTPROPERTY(object_id, 'IsMSShipped') = 0
                          THEN ''
                          ELSE ' NOT FOR REPLICATION '
                        END
                   ELSE ''
                 END
               + CASE WHEN c.is_nullable = 0 THEN ' NOT NULL' ELSE ' NULL' END
               + CASE WHEN c.default_object_id <> 0
                   THEN ' DEFAULT ' + OBJECT_DEFINITION(c.default_object_id)
                   ELSE ''
                 END
               + CASE WHEN c.collation_name IS NULL THEN ''
                      WHEN c.collation_name <>
                        (SELECT collation_name FROM sys.databases
                         WHERE name = DB_NAME()) COLLATE Latin1_General_CI_AS
                        THEN COALESCE(' COLLATE ' + c.collation_name, '')
                      ELSE ''
                 END, '  ', ' ') AS [DataType]
           FROM sys.columns c
           INNER JOIN sys.types t ON c.user_type_id = t.user_type_id
           WHERE OBJECT_SCHEMA_NAME(c.object_id) <> 'sys')
        SELECT CONVERT(CHAR(80), ObjectName + '.' + ColumnName), DataType
        FROM userColumns
        WHERE ColumnName IN
          (SELECT ColumnName FROM userColumns
           GROUP BY ColumnName
           HAVING MIN(DataType) <> MAX(DataType))
        ORDER BY ColumnName

    If you take a look down the results against AdventureWorks, you'll see once again that there are things to investigate: mostly, in the illustration, discrepancies between NULL and NOT NULL datatypes. So, I hear you ask, what about temporary variables within routines? If ever there was a source of elusive bugs, you'll find it there. Sadly, these temporary variables are not stored in the metadata, so we'll have to find a more subtle way of flushing them out, and that will, I'm afraid, have to wait!

    Read the article

  • SQL SERVER – Working with FileTables in SQL Server 2012 – Part 2 – Methods to Insert Data Into Table

    - by pinaldave
    Read Part 1: Working with FileTables in SQL Server 2012 – Part 1 – Setting Up Environment. In this second part of the series, we will see how we can insert files into FileTables. There are two methods to insert data into FileTables:

    Method 1: Copy-paste data into the FileTables folder

    First, find the folder where the FileTable will be storing the files. Go to Databases >> Newly Created Database (FileTableDB) >> Expand Tables. Here you will see a new folder named "FileTables". When expanded, it shows the newly created "FileTableTb". Right-click on the newly created table and click on "Explore FileTable Directory". This will open up the folder where the FileTable data will be stored. On my local machine, it opens the following folder:

        \\127.0.0.1\mssqlserver\FileTableDB\FileTableTb_Dir

    You can just copy your documents there. I copied a few Word documents there and ran a SELECT statement to see the result.

        USE [FileTableDB]
        GO
        SELECT * FROM FileTableTb
        GO

    SELECT * returns all the rows. Here is a SELECT statement which has only a few columns selected from the FileTable:

        SELECT [name]
              ,[file_type]
              ,CAST([file_stream] AS VARCHAR) FileContent
              ,[cached_file_size]
              ,[creation_time]
              ,[last_write_time]
              ,[last_access_time]
        FROM [dbo].[FileTableTb]
        GO

    I believe this is the simplest method to populate a FileTable, because you just have to move the files into the table's folder.

    Method 2: T-SQL INSERT statement

    There are always cases when you might want to programmatically insert files into a SQL Server FileTable. Here is a quick method which you can use to insert data into the FileTable. I have inserted a very small text file using T-SQL, and later on read it using the SELECT statement demonstrated in Method 1 above.

        INSERT INTO [dbo].[FileTableTb] ([name],[file_stream])
        SELECT 'NewFile.txt', *
        FROM OPENROWSET(BULK N'd:\NewFile.txt', SINGLE_BLOB) AS FileData
        GO

    The above T-SQL statement copies NewFile.txt to the new location. When you run the SELECT statement, it retrieves the file and lists it in the resultset, and it returns the file's content as well.

        SELECT [name]
              ,[file_type]
              ,CAST([file_stream] AS VARCHAR) FileContent
              ,[cached_file_size]
              ,[creation_time]
              ,[last_write_time]
              ,[last_access_time]
        FROM [dbo].[FileTableTb]
        GO

    I think it is a pretty interesting way to insert data into a FileTable. There is more to FileTables, and we will see those topics in my future blog posts. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Filestream
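    As a side note (not from the original post): if you would rather not hard-code the UNC share path used in Method 1, SQL Server 2012 also exposes helper functions that resolve FileTable paths. A hedged sketch, reusing the table above:

        -- Root UNC path of the FileTable (sketch; SQL Server 2012 FileTables)
        SELECT FileTableRootPath(N'dbo.FileTableTb');

        -- Relative path of each stored file within the FileTable share
        SELECT [name], file_stream.GetFileNamespacePath() AS RelativePath
        FROM [dbo].[FileTableTb];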

    Read the article

  • Why learn Flash Builder 4 (Flex) when I can just use Flash Professional?

    - by Jason McKenna
    I want to learn Flash Builder 4 (Flex) because I see so many jobs requesting experience with it. I also just like knowing stuff. I am also very interested in focusing on RIA development now. BUT... can anyone tell me CLEARLY why the heck I would ever use Flex over Flash Pro? It is a time investment, so is it worth it? All I read are misguided posts about how Flash Pro is for games and banner ads, and Flex is for programmers and RIAs, blah blah... this simply isn't so from my 9 years of contracting experience. I'm 99.9% certain that I can build anything a Flex developer can build, but using Flash Pro. I can build powerful AS3-driven apps for the desktop, mobile device, or browser; I can link to databases with XML; and I can import text files and communicate with ColdFusion and everything. The advantage with Flash Pro is that I can also easily and clearly animate transitions and build custom elements that look the way I want/need them to look for my specific client. Why would I want to use a bunch of pre-built components that drive my file sizes to the moon? Who is happy with a drag-n-drop button? Is Flex just a thing made for programmer people with no artistic inclination? What is the advantage of using it? It takes me back to Visual Basic class. Seems like a pain to have to use multiple tools to import crap from Flash Pro into Flex and yada yada... why, when I can do it all nicely in Flash Pro to begin with? Am I clueless, or missing some major piece of the puzzle? Thanks for any clarity. PS, I couldn't care less about the code editors. It ain't that bad, people. They make it out like the thing doesn't even respond to keyboard input or something. Does everything I need it to do anyway. Please help out here. If I just don't need to learn it, I don't want to waste the time. Jase

    Read the article

  • It's called College.

    - by jeffreyabecker
    Today I saw yet another "GUID vs. int as your primary key" article. Like most of the ones I've read, this was filled with technical misrepresentations and outright fallacies. Chef's famous line that "There's a time and a place for everything, children" applies here. GUIDs have distinct advantages and disadvantages which should be considered when choosing a data type for the primary key.

    Fallacy 1: "It's easier." An integer data type (tinyint, smallint, int, bigint) is supposedly a better artificial key than a GUID because it's easier to remember. I'm a firm believer that your artificial primary keys should be opaque gibberish. PKs are an implementation detail which should never be exposed to the user or relied on for business logic. If you want things to come back in an order, add an ORDER BY clause and SortOrder fields. If you want a human-usable look-up, add a business key with a unique constraint. If you want to know what order things were inserted into a table, add a timestamp.

    Fallacy 2: "Size matters." For many applications, the size of the artificial primary key is going to be irrelevant. The particular article which kicked this post off stated repeatedly that joining against an int performs better than joining against a GUID. In computer science the performance of your algorithm is always a function of the number of data points. This still holds true for databases. Unless your table is very large, the performance difference between an int and a GUID probably isn't going to be measurable, let alone noticeable. My personal experience is that performance becomes an issue when you start having billions of rows in the table. At that point, you should probably start looking to move from int to bigint, so the effective space/performance gain isn't as much as you'd think.

    GUID advantages:
    - Insert-ability/mergeability: you can reliably insert GUIDs into tables without key collisions.
    - Database independence: saving entities to the database often requires knowing IDs. With identity-based IDs, the ID must be selected back after every insert. GUIDs can be generated application-side, allowing much faster inserts.

    GUID disadvantages:
    - Generatability: you can calculate the next ID for an integer PK pretty easily in your head, but will need a program to generate GUIDs. Solution: "SELECT TOP 100 newid() FROM sysobjects".
    - Fragmentation: most GUID generation algorithms generate pseudo-random GUIDs. This can cause inserts into the middle of your clustered index. Solutions: add a default of newsequentialid(), or use GuidComb in NHibernate.
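    To make the fragmentation fix concrete, here is a minimal sketch (the table and column names are made up) of the newsequentialid() default mentioned above; sequential GUIDs append to the end of the clustered index instead of splitting pages in its middle:

        -- Hypothetical table using a sequential GUID key
        -- (NEWSEQUENTIALID() is only valid as a column DEFAULT)
        CREATE TABLE dbo.Orders
        (
            OrderID   UNIQUEIDENTIFIER NOT NULL
                      CONSTRAINT DF_Orders_OrderID DEFAULT NEWSEQUENTIALID()
                      CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED,
            OrderDate DATETIME NOT NULL DEFAULT GETUTCDATE()
        );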

    Read the article

  • SQL Server MVP Deep Dives 2. The Awesome Returns.

    - by Mladen Prajdic
    Two years ago, 59 SQL Server MVPs came together and helped make one of the best books on SQL Server out there. Each chapter was written by an MVP about a part of SQL Server they loved working with. This resulted in superb-quality content and excellent ratings from the readers. To top it off, all earnings went to a good cause, the War Child International organization. That book was SQL Server MVP Deep Dives. This year, 63 SQL Server MVPs, me included, decided it was time to repeat the success of the first book. Let me introduce you to: SQL Server MVP Deep Dives 2. The topics in its 60 chapters are grouped into 5 sections: Architecture, Database Administration, Database Development, Performance Tuning and Optimization, and Business Intelligence. They represent over 1000 years of daily experience in various areas of SQL Server. I have contributed chapter 28 in the Database Development section, titled "Getting asynchronous with Service Broker". In it I show you the Service Broker template you can use for secure communication between two or more SQL Server instances for whatever purpose you may have. If you haven't heard of Service Broker, it's a part of the database engine that enables you to do completely async operations in the database itself, or between databases and instances. The official release of the book will be next week at PASS, where there will be 2 slots at which most of the authors will be signing the books you bring. This is also a great opportunity to meet everyone and ask about any problems you may have. So definitely come say hi. Again we decided on a charity that will be supported by this book. It's called Operation Smile. They provide free surgeries to repair cleft lip, cleft palate and other facial deformities for children around the globe. You can also help them by donating. You can preorder the book on the Manning Publications website or on Amazon. By having it you not only get to learn a lot, improve your skills and have fun, but you also help a child have a normal life. If that's not a good cause, then I don't know what is.
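    For readers who have never seen Service Broker, a minimal single-database sketch gives the flavour of the moving parts (all object names here are hypothetical, the chapter's actual template is more complete, and the database must have the broker enabled):

        -- Hypothetical minimal Service Broker conversation in one database
        CREATE MESSAGE TYPE RequestMsg VALIDATION = WELL_FORMED_XML;
        CREATE CONTRACT RequestContract (RequestMsg SENT BY INITIATOR);

        CREATE QUEUE InitiatorQueue;
        CREATE QUEUE TargetQueue;

        CREATE SERVICE InitiatorService ON QUEUE InitiatorQueue;
        CREATE SERVICE TargetService ON QUEUE TargetQueue (RequestContract);
        GO

        -- Send a message; the statement returns immediately (async)
        DECLARE @h UNIQUEIDENTIFIER;
        BEGIN DIALOG CONVERSATION @h
            FROM SERVICE InitiatorService
            TO SERVICE 'TargetService'
            ON CONTRACT RequestContract
            WITH ENCRYPTION = OFF;
        SEND ON CONVERSATION @h
            MESSAGE TYPE RequestMsg (N'<do>some work</do>');
        GO

        -- The target dequeues whenever it is ready
        RECEIVE TOP (1) conversation_handle, message_body FROM TargetQueue;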

    Read the article

  • SSAS Tabular Workshop online and other upcoming dates (and updates!) #ssas #tabular

    - by Marco Russo (SQLBI)
    After many conferences and travels, this summer I had some time to write and prepare new sessions for the next wave of conferences. In reality, I am still doing just that, even though I have already restarted traveling for consulting and training. So expect new content about DAX and Tabular in the coming months! Starting to see real customers adopting Tabular is revealing many new challenges, and there is still a lot to learn and to create. If you haven't started working on Tabular yet, well, you should. As I always say, as a BI developer you should be able to choose between Tabular and Multidimensional, and in order to do that you should know both of them! One thing that I don't like very much about the marketing is the line "Tabular is simpler", because it's often translated into "Tabular is for simple projects", and that last statement is not true. Actually, I see a lot of good reasons to adopt Tabular in complex data models, especially in non-traditional scenarios. I know, this is because I love to understand what the actual limits of a technology are, and I'm learning that there is simply a lot of room for improvement in Tabular too. It's already fast, but it could be faster! How can you start? Well, first of all, by reading our book. Then, by attending our SSAS Tabular workshop. There is an online edition of the workshop on September 3-4, 2012 (hurry up if you want to register), and there are already several dates planned for the next months (and others will be added soon!). And, of course, by installing SQL Server 2012 and trying to create models over your databases. If you are too lazy, just start with PowerPivot. As soon as you start working with Tabular or PowerPivot, you will see that there is one important skill you need: learning DAX. In the next few days I should publish an article that I'm finishing about best practices using SUMMARIZE and ADDCOLUMNS. If only someone had published this article one year ago, I would have saved many hours of my life. But, you know, flight manuals are written in blood... and someone has to write them! Stay tuned.

    Read the article

  • ArchBeat Link-o-Rama for July 3, 2013

    - by Bob Rhubart
    - Industrial SOA Chapter 5: Enterprise Service Bus. Enterprise Service Bus, the fifth and latest addition to the Industrial SOA article series, answers some of the most important questions surrounding the use of an ESB.
    - Industrial SOA Chapter 4: SOA Maturity. The fourth article in the Industrial SOA series, SOA Maturity offers "an exploration of the fundamentals of applying a factory approach to modern service-oriented software development."
    - Using the Exalytics Summary Advisor and Oracle BI Apps 7.9.6.4 | Mark Rittman. Oracle ACE Director Mark Rittman's post revisits "the use of the Summary Advisor, with my BI Apps installation bumped-up to version 7.9.6.4, and the Exalytics environment patched up to 11.1.1.6.9, the latest patch release we've applied to that environment."
    - Part 1 - 12c Database and WLS - Overview | Steve Felts. Steve Felts shares a handy table that "maps the Oracle 12c Database features supported with various combinations of currently available WLS releases, 11g and 12c Drivers, and 11g and 12c Databases."
    - Developers WebCast: Deploy Highly-Available Custom Services on Your Data Grid Products - July 11. Oracle Coherence Sr. Architect Brian Oliver hosts this free July 11 webcast for developers, showing how to "create and deploy customized, highly-available services for your data grid, and how real-time data processing will allow you to provide unmatched end-user experiences."
    - A checklist for OIM go live | Daniel Gralewski. FMW A-Team solution architect Daniel Gralewski's checklist is intended to complement the Oracle Identity Manager documentation; his post "provides tips on a few topics that are not part of the documentation."
    - How Many ODI Master Repositories Should We Have? | Christophe Dupupet. FMW solution architect Christophe Dupupet provides best practices for the architecture of ODI repositories in a corporate environment.
    - Distinguish EA from enterprise wide solution architecture | John Wu. My buddy Tony Meyer, who did a great presentation recently at the Cleveland-area Enterprise Architect/Solution Architect Meet-up, recommends this Toolbox article by John Wu.
    - YouTube: Oracle Fusion Applications Developer Tips. If you work with Fusion Applications, you'll want to check out the tips and tricks for building extensions, customizations, and integrations now available on the new Oracle Fusion Middleware Developer Relations YouTube channel.
    - The CX Factor: Wooing and wowing customers in the digital age. "There was a time when 'customer experience' was limited to what happened to you when you walked into a store, restaurant, or other place of business, or when you called a business on the telephone. But that was back when you could still smoke on airplanes."
    - Thought for the Day: "If you had to identify, in one word, the reason why the human race has not achieved, and never will achieve, its full potential, that word would be 'meetings.'" Dave Barry (born July 3, 1947). Source: brainyquote.com

    Read the article

  • Building Private IaaS with SPARC and Oracle Solaris

    - by ferhat
    A superior enterprise cloud infrastructure with high-performing systems using built-in virtualization! We are happy to announce the expansion of the Oracle Optimized Solution for Enterprise Cloud Infrastructure with Oracle's SPARC T-Series servers and Oracle Solaris. Designed, tuned, tested and fully documented, the Oracle Optimized Solution for Enterprise Cloud Infrastructure now offers customers looking to upgrade, consolidate and virtualize their existing SPARC-based infrastructure a proven foundation for private cloud-based services which can lower TCO by up to 81 percent(1). It offers faster time to service, reduces deployment time from weeks to days, and can increase system utilization to 80 percent. The Oracle Optimized Solution for Enterprise Cloud Infrastructure can also be deployed at up to 50 percent lower cost over five years than comparable alternatives(2). The expanded solution announced today combines Oracle's latest SPARC T-Series servers; Oracle Solaris 11, the first cloud OS; Oracle VM Server for SPARC; Oracle's Sun ZFS Storage Appliance; and Oracle Enterprise Manager Ops Center 12c, which manages all Oracle system technologies, streamlining cloud infrastructure management. Thank you to all who stopped by the Oracle booth at the CloudExpo Conference in New York. We were also at Cloud Boot Camp: Building Private IaaS with Oracle Solaris and SPARC, discussing how this solution can maximize return on investment and help organizations manage costs for their existing infrastructures or for new enterprise cloud infrastructure designs. Designed, tuned, and tested, the Oracle Optimized Solution for Enterprise Cloud Infrastructure is a complete foundation for cloud infrastructure or any virtualized environment, using proven, documented best practices for deployment and optimization. The solution addresses each layer of the infrastructure stack using Oracle's powerful SPARC T-Series as well as x86 servers, with storage, network, virtualization, and management configurations that provide a robust, flexible, and balanced foundation for your enterprise applications and databases. For more information, visit Oracle Optimized Solution for Enterprise Cloud Infrastructure. Solution Brief: Accelerating Enterprise Cloud Infrastructure Deployments. White Paper: Reduce Complexity and Accelerate Enterprise Cloud Infrastructure Deployments. Technical White Paper: Enterprise Cloud Infrastructure on SPARC. (1) Comparison based on current SPARC server customers consolidating existing installations, including Sun Fire E4900, Sun Fire V440 and SPARC Enterprise T5240 servers, to latest-generation SPARC T4 servers. Actual deployments and configurations will vary. (2) Comparison based on a solution with SPARC T4-2 servers with Oracle Solaris and Oracle VM Server for SPARC versus HP ProLiant DL380 G7 with VMware and Red Hat Enterprise Linux, and IBM Power 720 Express - Power 730 Express with IBM AIX Enterprise Edition and PowerVM.

    Read the article

  • MySQL Enterprise Backup 3.8.2 - Overview

    - by Priya Jayakumar
    MySQL Enterprise Backup (MEB) is the ideal solution for backing up MySQL databases. MEB 3.8.2 was released in June 2013. The MySQL Enterprise Backup 3.8.2 release's main goal is to improve usability. With this release, users can track the progress of a backup both in terms of size and as a percentage of the total. This release also offers options to manage the behavior of MEB in case the space on the secondary storage is completely exhausted during backup. The progress indicator is a (short) string that indicates how far the execution of a time-consuming MEB command has progressed. It consists of one or more "meters" that measure the progress of the command. Two options are introduced to control the progress-reporting function of the mysqlbackup command: (1) --show-progress and (2) --progress-interval. The user can control the progress indicator by using the --show-progress option in any of the MEB operations. This option instructs MEB to output periodic short reports on the progress of time-consuming commands. The argument of this option specifies where the output is sent; for example, it could be stderr, stdout, a file, a fifo or a table. With the --show-progress option, both the total size of the backup to be copied and the size already copied will be shown. Along with this, the state of the operation, for example data or metadata being copied or tables being locked, will also be reported. This gives the DBA clearer information on the progress of the running backup. The interval between progress reports, in seconds, is controlled by the --progress-interval option. For more information on this, please refer to progress-report-options. MEB can also be accessed through a GUI in MySQL Workbench's next version, which can be used as a front-end interface for MEB users to perform backup operations at the click of a button. This feature was highly requested by DBAs and will be very useful. Refer to http://insidemysql.com/mysql-workbench-6-0-a-sneak-preview/ for info on the upcoming Workbench release. Along with the progress-report feature, some other important issues are also addressed in MEB 3.8.2. A new command-line option, --on-disk-full, is introduced to abort or warn the user when a backup process encounters a full-disk condition; when no argument is given, it aborts by default. A few issues related to incremental backup are also addressed in this release; please refer to the 3.8.2 documentation for more details. It would be good for MEB users to move to 3.8.2 to take incremental backups. Overall, the added usability and the important defects fixed in this release make MySQL Enterprise Backup 3.8.2 a promising release.
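    As an illustration of how these options combine (the paths, credentials and argument values here are hypothetical; the option names are the ones described above and detailed in the 3.8.2 documentation), a backup with progress reporting might be invoked along these lines:

        mysqlbackup --user=admin --password \
                    --backup-dir=/backups/2013-06-30 \
                    --show-progress=stderr \
                    --progress-interval=30 \
                    --on-disk-full=warn \
                    backup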

    Read the article

  • New hidden parameters in Oracle 11.2

    - by Mike Dietrich
    We really welcome every external review of our slides, and also recommendations from customers visiting our workshops. So it happened to me more than a week ago that Marco Patzwahl, the owner of MuniqSoft GmbH, had a very lengthy train ride in Germany (as the engine drivers are on strike this week, it could have become even worse) and nothing better to do than review our slide set. And he had plenty of recommendations. Besides that, he pointed us to something at least I was not aware of, and it has been added to the slides: in patch set 11.2.0.2, a new behaviour for datafile write errors has been implemented. With this release, ANY write error to a datafile will cause the instance to abort. Before 11.2.0.2, those errors usually led to an offline datafile if the database operates in archivelog mode (your production databases do, don't they?!) and the datafile does not belong to the SYSTEM tablespace. Internal discussion found this behaviour neither up to date nor aligned with RAC systems and modern storage. It has therefore been changed, and a new underscore parameter was introduced: _DATAFILE_WRITE_ERRORS_CRASH_INSTANCE=TRUE. This is the default setting and the new behaviour beginning with Oracle 11.2.0.2. If you would like to revert to the pre-11.2.0.2 behaviour, you'll have to set this parameter to FALSE in your init.ora/spfile. But keep in mind that there's a reason why this has been changed. You'll find more info in MOS Note 7691270.8, and this topic is covered in the current version of the slides on slide 255. Thanks to Marco for the review!!

    And then I received an email from Kurt Van Meerbeeck today. Kurt is pretty well known in the Oracle community, and he's the owner of jDUL/DUDE, a database unloading tool which bypasses the Oracle database engine and accesses data directly from the blocks. Kurt visited the upgrade workshop two weeks ago in Belgium and highlighted to me that, since Oracle 11.2.0.1, even if you have set neither SGA_TARGET nor MEMORY_TARGET, the database might still do resize operations. The reason why this behaviour was changed: prevention of ORA-4031 errors. But on databases with extremely high loads, this can cause trouble. Further information can be found in MOS Note 1269139.1. The parameter, set to TRUE by default, is called _MEMORY_IMM_MODE_WITHOUT_AUTOSGA=TRUE. This can now be found in the slide set as well, on slide number 240. And thanks to Kurt for this information!!
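    For reference, reverting either default is a one-line spfile change followed by an instance restart; a sketch (underscore parameters should only ever be set on the advice of Oracle Support):

        -- Revert to the pre-11.2.0.2 datafile write error handling
        ALTER SYSTEM SET "_datafile_write_errors_crash_instance" = FALSE
            SCOPE = SPFILE;

        -- Disable immediate-mode memory resize operations (see MOS Note 1269139.1)
        ALTER SYSTEM SET "_memory_imm_mode_without_autosga" = FALSE
            SCOPE = SPFILE;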

    Read the article

  • Getting away from a customized Magento 1.4 installation - Magento 1.6, OpenCart, or others?

    - by Phil
    I'm dealing with a Magento 1.4.0.0 Community Edition installation with various undocumented changes to the core (mostly integration with an ERP system), an outdated Sweet Tooth Points & Rewards module and some custom payment providers. It also doubles as a mediocre blogging/CMS system. It has one store each for 3 different languages, with about 40 product categories for a few hundred products. [rant] With no prior experience with any PHP e-commerce systems, I find it very difficult to work with. I attempted to install Magento 1.4.0.0 on my local WAMP dev machine; it installs fine, but the main page and search do not show any products no matter what I do in the backend admin panel. I don't know what's wrong with it, and whatever information I have googled is either too old or too new for Magento 1.4. Later I was given FTP access to the testing server, on which neither my manager nor I have permission to install XDebug, as it apparently runs on the same server as the production server (yikes). Trying to learn how Magento works is torture. I spent a week trying to add some fields into the Onepage Checkout before giving up and going to work on something else. The template system, just like the rest of Magento, is a bloated mishmash of overcomplicated directory structures, weird config XML files and EAV databases. I went into 6 different models and several content blocks in the backend just to change what the front page looks like. With little to no helpful, clear documentation (unlike CodeIgniter) and various breaking changes between minor point revisions which make it hard to find useful information, Magento 1.4 is a developer killer. [/rant] The client is planning to redesign the site and has decided it might as well move on from this unsustainable, hacky, upgrade-unfriendly, developer-unfriendly mess. Magento 1.4 is starting to show its age, and with Magento 1.7 coming soon, the client is considering upgrading to Magento 1.6 or 1.7 if it has improved since 1.4. The customizations done to the current Magento 1.4 installation will have to be redone, and a new license for the Sweet Tooth Points & Rewards module will have to be bought. The client is also open to other e-commerce systems. I've looked at OpenCart, and it seems to be quite developer-friendly with a fairly simple structure. I found some complaints regarding its performance when the shop has thousands of categories or products, but this is not an issue with the current number of products my client has. It seems to be solid ground for easily bringing the rewards system and ERP integration over. What should the client upgrade to in this case?

    Read the article

  • Should CSS be listed on your resume under Languages?

    - by Sandeepan Nath
    I have some doubts, such as whether CSS should be put under "Languages" or not. Wikipedia says "Cascading Style Sheets (CSS) is a style sheet language...", but do people write CSS under the languages section of a resume, along with PHP, etc.? Similarly, what about HTML? I have some doubt, and I don't want to sound like someone who is not aware of the trends. Just to give an example, currently I have the following languages, frameworks, technologies, etc. listed under the "Technical Expertise" section of my resume:

    Technical Expertise
    * Languages - Proficient: PHP 5, JavaScript, HTML?*, CSS?*, Sass?*. Beginner: Linux Bash.
    * Databases - MySQL 5.
    * Technologies - AJAX.
    * Frameworks/Libraries - Symfony, jQuery.
    * CMSes - WordPress.

    Although my domain is web development/design, I welcome domain-agnostic answers which can provide some generic ideas/reasoning. I have seen a lot of people mess up these sections (even more seriously than my doubts :) ), putting things under the wrong sub-headings and thus putting a big question mark on their understanding of those things. I don't know much about XML, Comet technology, etc. Considering those are included too: what things should be put under Languages? E.g., should CSS be put under Languages? Please give some reasoning to support your views. Where should the others (XML, Comet, cURL, etc.) be put? I welcome some examples of how you put it. Or do you have an additional Keywords section where you write all the unsortables? Considering a set of standards like the W3C standards, do you have a standards sub-heading? I guess I have put the contents of the other sections okay, but do let me know your ideas and reasoning. After all, I understand there may not be a single answer to this, but let's see what the trend is. Thanks.

    Updates: Further, do you mention design patterns you have used? Web services, etc.? Where do you mention SOAP, XML, etc.?

    Read the article

  • Happy New Year! Upcoming Events in January 2011

    - by mandy.ho
    Oracle Database kicks off the New Year at the following events during the month of January. Hope to see you there, and please send in your pictures and feedback!

    Jan 20, 2011 - San Francisco, CA: LinkShare Symposium West 2011. Oracle is a proud Gold Sponsor of the LinkShare Symposium West 2011 on January 20 in San Francisco, California. Year after year, LinkShare has been bringing its network the opportunity to come to life. At the LinkShare Symposium, online performance-marketing leaders meet to optimize face-to-face during a full day of networking. Learn more by attending the Oracle breakout session "Omni-Channel Retailing: What is possible now?" on Thursday, January 20, 11:15 a.m. - 12:00 noon, Grand Ballroom. http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=128306&src=6954634&src=6954634&Act=397

    Jan 24, 2011 - Cincinnati, OH: Greater Cincinnati Oracle User Group Meeting, "Tom Kyte Day", featuring a day of sessions presented by Senior Technical Architect Tom Kyte. Sessions include "Top 10, no 11, new features of Oracle Database 11g Release 2" and "What do I really need to know when upgrading", plus more. http://www.gcoug.org/

    Jan 25, 2011 - Vancouver, British Columbia: Oracle Security Solutions Forum, featuring a special keynote presentation from Tom Kyte on complete database security. Join us at this half-day event, Oracle Database Security Solutions: Complete Information Security. Learn how Oracle Database Security solutions help you:
    • Prevent external threats like SQL injection attacks from reaching your databases
    • Transparently encrypt application data without application changes
    • Prevent privileged database users and administrators from accessing data
    • Use native database auditing to monitor and report on database activity
    • Mask production data for safe use in nonproduction environments
    http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=126974&src=6958351&src=6958351&Act=97

    Jan 26, 2011 - Halifax, Nova Scotia: Oracle Database Security Technology Day, an exclusive seminar on complete information security with Oracle Database 11g. The amount of digital data within organizations is growing at unprecedented rates, as is the value of that data and the challenge of safeguarding it. Yet most IT security programs fail to address database security, specifically insecure applications and privileged users. So how can you protect your mission-critical information, avoid risky third-party solutions, defend against security breaches and compliance violations, and resist costly new infrastructure investments? Join us at this half-day seminar, Oracle Database Security Solutions: Complete Information Security, to find out. http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=126269&src=6958351&src=6958351&Act=93

    Read the article


  • Bay Area Coherence Special Interest Group Next Meeting July 21, 2011

    - by csoto
    Date: Thursday, July 21, 2011
    Time: 4:30pm - 8:15pm ET (note that parking at 475 Sansome closes at 8:30pm)
    Where: Oracle Office, 475 Sansome Street, San Francisco, CA
    We will be providing snacks and beverages. Register! Registration is required for building security.

    Presentation line-up:

    5:10pm - Batch Processing Using Coherence in Oracle Group Policy Administration - Paul Cleary, Oracle. Oracle Insurance Policy Administration (OIPA) is a flexible, rules-based policy administration solution that provides full record keeping for all policy lifecycle transactions. One component of OIPA is Cycle processing, which is the batch processing of pending insurance transactions. This presentation introduces OIPA and Cycle processing, describing the unique challenges of processing a high volume of transactions within strict time windows. It then reviews how OIPA uses Oracle Coherence and the Processing Pattern to meet these challenges, describing implementation specifics that highlight the simplicity and robustness of the Processing Pattern.

    6:10pm - Secure, Optimize, and Load Balance Coherence with F5 - Chris Akker, F5. F5 Networks, Inc., the global leader in Application Delivery Networking, helps the world's largest enterprises and service providers realize the full value of virtualization, cloud computing, and on-demand IT. Recently, F5 and Oracle partnered to deliver a novel solution that integrates Oracle Coherence 3.7 with F5 BIG-IP Local Traffic Manager (LTM). This session will introduce F5 and show how you can leverage BIG-IP LTM to secure, optimize, and load balance application traffic generated from Coherence*Extend clients across any number of servers in a cluster, and to hardware-accelerate CPU-intensive SSL encryption.

    7:10pm - Using Oracle Coherence to Enable Database Partitioning and DC Level Fault Tolerance - Alexei Ragozin, Independent Consultant, and Brian Oliver, Oracle. Partitioning is a very powerful technique for scaling database-centric applications. One tricky part of a partitioned architecture is routing requests to the right database. The routing layer (routing table) should know the right database instance for each attribute which may be used for routing (e.g. account id, login, email, etc.): it should be fast, it should be fault tolerant and it should scale. All of the above makes Oracle Coherence a natural choice for implementing such routing tables in partitioned architectures. This presentation will cover synchronization of the grid with multiple databases, conflict resolution, cross-cluster replication and other aspects related to implementing a robust partitioned architecture.

    Additional info:
    - Download past presentations: the presentations from the previous meetings of the BACSIG are available for download here. Click on the presentation titles to download the PDF files.
    - Join the Coherence online community in our Oracle Coherence Users Group on LinkedIn.
    - Contact BACSIG with any comments, questions, presentation proposals and content suggestions.

    Read the article

  • Great opportunity to try Windows Azure over the next 7 days if you are a UK developer – act to

    - by Eric Nelson
    Are you a UK-based developer who has been put off from trying out the Windows Azure Platform? Were you concerned that you needed to hand over credit card details even to use the introductory offer? Or concerned about how many charges you might run up as you played with "elastic computing"? Then we might have just what you need.
    7 days of access to the Windows Azure Platform – for FREE (expires June 6th 2010)
    If you are accepted, you will be given a Windows Azure Platform subscription that will enable you to create Windows Azure hosted services and storage accounts, SQL Azure databases and AppFabric services without any fear of being charged between now and Sunday the 6th of June 2010. No credit card is required. Important: At the end of Sunday your subscription and all the code and data you have uploaded will be deleted. It is your responsibility to keep local copies of your code and data.
    Apply now. To apply for this offer you need to:
    - email ukdev AT microsoft.com with a subject line that starts "UKAZURETRAIL:" (this must be present)
    - demonstrate in the email that you are UK based (.uk email alias or address or… be creative)
    - include 30 to 100 words explaining what your interest is in the Windows Azure Platform and Cloud Computing, and what you would use the 7 days to explore
    Some notes (please read!): We have a limited number of these offers to give away on a first come, first served basis (subject to meeting the above criteria). We plan to process all requests asap – but there is a UK bank holiday weekend looming. We will do our best to process them all by Tuesday afternoon (which would still give you 5 days of access). There will be no specific support for this offer. We will not be processing any requests that arrive after Tuesday 1st. In case you were wondering, there is no equivalent offer for developers outside of the UK. This offer is a direct result of UK-based training we are currently doing, which has some spare Azure capacity we wanted to make best use of. Sorry in advance if you are based outside of the UK.
    Related Links: If you are UK based, you should also join the UK Windows Azure Platform community http://ukazure.ning.com Microsoft UK Windows Azure Platform page

    Read the article

  • Installing MOSS 2007 on Windows 2008 R2

    - by Manesh Karunakaran
    When you try to install MOSS 2007 on Windows 2008 R2 using installation media older than SP2, you will get the following error saying that "This program is blocked due to compatibility issues". All is not lost though: all you need to do is slipstream the SP2 updates into the MOSS 2007 setup. Here's a nice how-to on doing exactly that: http://blogs.technet.com/seanearp/archive/2009/05/20/slipstreaming-sp2-into-sharepoint-server-2007.aspx Once you slipstream the SP2 updates, you will be able to continue with the installation without the above error. HTH.
    You may already have read on other blogs about the April Cumulative Update for the separate SharePoint components. Now, the server packages (also known as "Uber" packages) of the April Cumulative Update for Microsoft Office SharePoint Server 2007 and Windows SharePoint Services 3.0 are ready for download.
    Download Information
    - Windows SharePoint Services 3.0 April cumulative update package: http://support.microsoft.com/hotfix/KBHotfix.aspx?kbnum=968850
    - Office SharePoint Server 2007 April cumulative update package: http://support.microsoft.com/hotfix/KBHotfix.aspx?kbnum=968851
    Detailed Description
    - Description of the Windows SharePoint Services 3.0 April cumulative update package: http://support.microsoft.com/kb/968850
    - Description of the Office SharePoint Server 2007 April cumulative update package: http://support.microsoft.com/kb/968851
    Installation Recommendation for a fresh SharePoint server. To keep all files in a SharePoint installation up to date, the following sequence is recommended:
    1. Service Pack 2 for Windows SharePoint Services 3.0
    2. Service Pack 2 for Office SharePoint Server 2007
    3. April Cumulative Update package for Windows SharePoint Services 3.0
    4. April Cumulative Update package for Office SharePoint Server 2007
    Please note: starting from the April Cumulative Update, the packages will no longer install on a farm without a service pack installed. You must have installed either Service Pack 1 (SP1) or SP2 prior to installing the cumulative updates. After applying the preceding updates, run the SharePoint Products and Technologies Configuration Wizard or "psconfig -cmd upgrade -inplace b2b -wait" on the command line. This needs to be done on every server in the farm with SharePoint installed. The version of the content databases should be 12.0.6504.5000 after successfully applying these updates.
    For more in-depth guidance on the update process, we recommend that customers refer to the following articles. These articles describe the correct way to deploy updates, identify known issues (and resolutions), and provide information about creating slipstream builds.
    - Deploy software updates for Windows SharePoint Services 3.0: http://technet.microsoft.com/en-us/library/cc288269.aspx
    - Deploy software updates for Office SharePoint Server 2007: http://technet.microsoft.com/en-us/library/cc263467.aspx
    - Create an installation source that includes software updates (Windows SharePoint Services 3.0): http://technet.microsoft.com/en-us/library/cc287882.aspx
    - Create an installation source that includes software updates (Office SharePoint Server 2007): http://technet.microsoft.com/en-us/library/cc261890.aspx
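
    Since the psconfig step has to be repeated on every server in the farm, a small script helps. Here is a minimal PowerShell sketch, assuming the default SharePoint 2007 "12 hive" path under Common Files (adjust the path if your installation differs):

        # Run the post-update configuration step on this server and fail
        # loudly if it does not succeed, so the upgrade logs get checked.
        $psconfig = Join-Path $env:CommonProgramFiles `
            "Microsoft Shared\web server extensions\12\BIN\psconfig.exe"

        & $psconfig -cmd upgrade -inplace b2b -wait

        if ($LASTEXITCODE -ne 0) {
            Write-Error "psconfig failed with exit code $LASTEXITCODE - check the upgrade logs in the 12\LOGS folder."
        }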

    Read the article

  • What is new in Oracle SOA Suite 11g R1 PS6? by Shanny Anoep

    - by JuergenKress
    Oracle has released version 11.1.1.7.0 of the Oracle Fusion Middleware product line. This version includes Patch Set #6 (PS6) for Oracle SOA Suite 11g R1, with a big list of improvements and fixes for each component in the suite. In this post we will highlight some of the interesting updates with regard to troubleshooting, performance, reliability and scalability.
    Infrastructure/Purging scripts
    Database growth is a common problem for large-scale Oracle SOA Suite deployments. Oracle already provides multiple purging strategies for the SOA Suite runtime database. This patch set includes two new scripts for purging most of the runtime data:
    - Table Recreation Script (TRS): This script can be used to reclaim as much database space as possible while still retaining the open instances. It can be used as a corrective action for databases that grew excessively, for example when purging was not performed at all. This should be used as a single corrective action only; the script does not replace the normal purging scripts.
    - Truncate script: Removes all records from the SOA Suite runtime tables without dropping the tables. This script can be used for cloning SOA Suite environments without copying the instance data, or for recreating test scenarios by cleaning out all the runtime data.
    The Oracle SOA Suite Administrator's Guide contains a table with the available purging strategies.
    Diagnostic dumps
    Using WLST you could already dump diagnostic information about various components of the SOA Suite. This version adds support for retrieving more information on BPEL and the adapters from the command line.
    Diagnostic dumps for BPEL
    New diagnostic dumps are available for BPEL to get information on thread pools, average processing time for BPEL components, and average waiting times for asynchronous instances. This information can be very useful for performance analysis or troubleshooting. With WLST this information can be retrieved from the command line and included in monitoring or reporting. Read the full article here.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
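
    As a rough illustration of scripting such a dump, the PowerShell sketch below shells out to WLST. The installation path, credentials, admin server URL and the dump name are all assumptions for illustration – check the SOA Suite Administrator's Guide for the dump names actually available at your patch level:

        # Write a tiny WLST (Jython) script to disk, then run it with the
        # wlst.cmd that ships under oracle_common. Everything below is a
        # placeholder: adjust paths, credentials and the dump name.
        $wlst = "C:\Oracle\Middleware\oracle_common\common\bin\wlst.cmd"

        $script = @(
            "connect('weblogic', 'welcome1', 't3://adminhost:7001')",
            "executeDump(name='soa.env', outputFile='c:/temp/soa_env_dump.txt')",
            "disconnect()"
        )
        Set-Content -Path soa_dump.py -Value $script

        & $wlst soa_dump.py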

    Read the article

  • Big Data – Buzz Words: What is NewSQL – Day 10 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the relational database. In this article we will take a quick look at what NewSQL is.
    What is NewSQL?
    NewSQL stands for the new generation of scalable, high-performance SQL database vendors. The products sold by NewSQL vendors are horizontally scalable. NewSQL is not a kind of database but a class of vendors who support emerging data products with relational database properties (like ACID, transactions etc.) along with high performance. Products from NewSQL vendors usually keep data in memory for speedy access and offer immediate scalability. The term NewSQL was coined by 451 Group analyst Matthew Aslett in this particular blog post. On the definition of NewSQL, Aslett writes: "NewSQL" is our shorthand for the various new scalable/high performance SQL database vendors. We have previously referred to these products as 'ScalableSQL' to differentiate them from the incumbent relational database products. Since this implies horizontal scalability, which is not necessarily a feature of all the products, we adopted the term 'NewSQL' in the new report. And to clarify, like NoSQL, NewSQL is not to be taken too literally: the new thing about the NewSQL vendors is the vendor, not the SQL. In other words, NewSQL incorporates the concepts and principles of Structured Query Language (SQL) and NoSQL languages. It combines the reliability of SQL with the speed and performance of NoSQL.
    Categories of NewSQL
    There are three major categories of NewSQL:
    - New Architecture – In this framework each node owns a subset of the data, and queries are split into smaller queries sent to the nodes that own the data. E.g. NuoDB, Clustrix, VoltDB
    - MySQL Engines – Highly optimized storage engines for SQL with the interface of MySQL are examples of this category. E.g. InnoDB, Akiban
    - Transparent Sharding – These systems automatically split the database across multiple nodes. E.g. ScaleArc
    Summary
    In simple words, NewSQL is a kind of database that follows relational database principles and provides scalability like NoSQL.
    Tomorrow
    In tomorrow's blog post we will discuss the role of Cloud Computing in Big Data.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Encrypting your SQL Server Passwords in Powershell

    - by laerte
    A couple of months ago, a friend of mine who is now bewitched by the seemingly supernatural abilities of Powershell (+1 for the team) asked me what initially appeared to be a trivial question: "Laerte, I do not have the luxury of being able to work with my SQL Servers through Windows Authentication, and I need a way to automatically pass my username and password. How would you suggest I do this?" Given that I knew he, like me, was using the SQLPSX modules (an open source project created by Chad Miller; a fantastic library of reusable functions and PowerShell scripts), I merrily replied, "Simply pass the Username and Password in the SQLPSX functions". He rather pointedly responded: "My friend, I might as well pass: Username 'Me', password 'NowEverybodyKnowsMyPassword'".
    As I do have the pleasure of working with Windows Authentication, I had not really thought this situation through yet (and thank goodness I only revealed my temporary ignorance to a friend, so the embarrassment was minimized). After discussing this puzzle with Chad Miller, he showed me some code for saving passwords in SQL Server tables, which he had demo'd in his Powershell ETL session at Tampa SQL Saturday (you can download the scripts from here). The solution seemed to be pretty much ready to go, so I showed it to my Authentication-impoverished friend, only to discover that we were only half-way there: "That's almost what I want, but the details need to be stored in my local txt file, together with the names of the servers that I'll actually use the Powershell scripts on. Something like: Server1,UserName,Password Server2,UserName,Password"
    I thought about it for just a few milliseconds (Ha! Of course I'm not telling you how long it actually took me; I have to do my own marketing, after all) and the solution was finally ready.
    First, we have to download Library-StringCripto (with many thanks to Steven Hystad), which is composed of two functions: one for encryption and one for decryption, both of which are used to manage the password. If you want to know more about the library, you can see more details in the help functions.
    Next, we have to create a txt file with your encrypted passwords:

        $ServerName = "Server1"
        $UserName = "Login1"
        $Password = "Senha1"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
        "$($Servername),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

        $ServerName = "Server2"
        $UserName = "Login2"
        $Password = "senha2"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
        "$($Servername),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

    And in the c:\temp\ServersSecurePassword.txt file which we've just created, you will find your usernames and passwords, all neatly encrypted – with server names, usernames and passwords separated by commas on each line.
    Decryption is actually much simpler:

        Read-EncryptedString -InputString $EncryptString -password "YourPassword"

    (Just remember that the password you use to decrypt must be exactly the same one you used to encrypt.)
    Finally, just to show you how smooth this solution is, let's say I want to use the Invoke-DBMaint function from SQLPSX to perform a CHECKDB on the system databases: it's just a case of split, decrypt and be happy!

        Get-Content c:\temp\ServersSecurePassword.txt | foreach {
            [array] $Split = ($_).split(",")
            Invoke-DBMaint -server $($Split[0]) -UserName (Read-EncryptedString -InputString $Split[1] -password "YourPassword") -Password (Read-EncryptedString -InputString $Split[2] -password "YourPassword") -Databases "SYSTEM" -Action "CHECK_DB" -ReportOn c:\Temp
        }

    This is why I love Powershell.

    Read the article

  • Database Mirroring – deprecated

    - by fatherjack
    Do you use mirroring on any of your databases? Do you use mirroring on SQL Server Standard Edition? I do, as a way of having a stand-by server ready to take over if there is a problem with the live server, so that business can continue despite whatever disaster may strike at our primary server location. In my experience it has been a great solution for us, as it is simple to implement, reliable and predictable.
    Mirroring has been around since SQL Server 2005 SP1, but with the release of SQL Server 2012 mirroring has now been placed on the deprecation list. That's right, Microsoft are removing this feature from SQL Server. SQL Server 2012 has lots of improvements and new features around this sort of technology – the High Availability, Disaster Recovery and AlwaysOn features described in detail here by Brent Ozar and Microsoft's own Customer Service and Support SQL Server Engineers. Now the bad news: the HADRON features are pretty much all wrapped up in the Enterprise Edition of SQL Server 2012. This is going to be a big issue for people, like me, who are only on Standard Edition of earlier versions, mostly due to our requirements and the budget (or lack thereof) required for Enterprise Edition licenses. No mirroring in Standard Edition means no upgrade.
    Don't panic. There are two stages of deprecation, and they don't happen fast. The first stage – Deprecation Announcement – means that Microsoft have decided that there is a limited future for a particular feature, and this is your cue that new projects and developments should not be implemented on this technology, as it will cease to exist in the future. This is where mirroring currently stands. You have time to consider your options and start work on planning how you will move away from using this feature. This can be 2 or 3 versions of SQL Server away, possibly more. The next stage is Deprecation Final Support – this is your last chance: when you see this, the next version of SQL Server will not have the feature in it, so you need to implement your plans to move to an alternative solution.
    While these two phases are taking place, Microsoft are open to feedback on how people use their products, and if enough people make the case for mirroring (or an equivalent technology) to be in the Standard Edition then they may make changes rather than lose customers, or have customers stop upgrading in order to keep the functionality they need. Denny Cherry (@MrDenny) has published an article on this same topic here with more detail than me, so I won't go over old ground. All I will say is that you should read his article now and then follow the link to his own site, where he is collecting information on how people use mirroring in Standard Edition so that our case can be put to Microsoft.
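
    If you are taking stock before planning a move away from mirroring, a quick inventory of the mirrored databases on an instance helps. A minimal sketch, assuming the SQL Server PowerShell module is loaded (for Invoke-Sqlcmd) and with "YourServer" as a placeholder:

        # List every mirrored database on the instance with its current role,
        # state and partner. sys.database_mirroring has a row per database;
        # mirroring_guid is NULL when mirroring is not configured.
        $query = "SELECT d.name, m.mirroring_role_desc, m.mirroring_state_desc, m.mirroring_partner_name " +
                 "FROM sys.database_mirroring AS m " +
                 "JOIN sys.databases AS d ON d.database_id = m.database_id " +
                 "WHERE m.mirroring_guid IS NOT NULL;"

        Invoke-Sqlcmd -ServerInstance "YourServer" -Query $query | Format-Table -AutoSize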

    Read the article

  • New Slides - and a discussion about Dictionary Statistics

    - by Mike Dietrich
    First of all, we have just uploaded a new version of the Upgrade and Migration Workshop slides with some added information, so please feel free to download them from here. The slides contain one interesting new piece of information, which led to a discussion I've had in the past days with a very large customer regarding their upgrades – and internally on the mailing list targeting an EBS database upgrade from Oracle 10.2 to Oracle 11.2.
    Why are we creating dictionary statistics during upgrade? I believe this forced dictionary statistics creation got introduced with the desupport of the Rule Based Optimizer in Oracle 10g. The goal: as the RBO is not supported anymore, we have to make sure that the data dictionary has fresh, non-stale statistics. Gathering dictionary statistics in Oracle 9i would actually have led to strange behaviour in some databases – so in Oracle 9i this was strongly discouraged. The upgrade scripts got hardcoded to create these stats.
    But during tests we had the following findings: it's important to create dictionary statistics the night before the upgrade. Not two weeks before, not 60 minutes before your downtime begins – but very close to the upgrade. From Oracle 10g onwards you'd just say:

        SQL> execute DBMS_STATS.GATHER_DICTIONARY_STATS;

    This is important to make sure you have fresh dictionary statistics during the upgrade, for performance reasons. Tests have shown that running an upgrade without valid dictionary statistics might slow down the whole upgrade by a factor of 2x-3x. And it would also be a great idea to create fresh dictionary statistics again after the upgrade, if you suppressed the stats creation during the upgrade process. Suppress? Yes, you can set this underscore parameter in the init.ora to suppress the forced dictionary statistics collection during an upgrade:

        _optim_dict_stats_at_db_cr_upg=FALSE

    We strongly believe that (a) people use the default statistics creation process, which gathers dictionary statistics by default, and (b) fresh stats get created on the dictionary before the upgrade. Once you have followed that advice, we consider it safe to use the underscore parameter during the upgrade. And we've taken that forced statistics collection during upgrade out of the next release of the database.
    Please note: if you are using the DBUA for the upgrade, it will remove underscore parameters for the upgrade run to improve performance – which is generally a good idea. So you'll have to start the DBUA with this call:

        $ dbua -initParam "_optim_dict_stats_at_db_cr_upg"=FALSE

    -Mike
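
    The pre-upgrade step is easy to script; here is a minimal PowerShell sketch, assuming sqlplus is on the PATH and that you can connect "/ as sysdba" on the database server (run it the night before the upgrade, not weeks earlier):

        # Gather fresh dictionary statistics shortly before the upgrade by
        # piping the statement into a silent SQL*Plus session as SYSDBA.
        "execute DBMS_STATS.GATHER_DICTIONARY_STATS;",
        "exit" | sqlplus -s "/ as sysdba"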

    Read the article

< Previous Page | 144 145 146 147 148 149 150 151 152 153 154 155  | Next Page >