Search Results

Search found 5233 results on 210 pages for 'a records'.


  • A few events I'm speaking at in early 2013

    - by Mladen Prajdic
    2013 has started great and the SQL community is already brimming with events. At some of these events you can come say hi. I'll be glad you do! These are the events, with dates and locations, that I know I'll be speaking at so far.

    February 16th: SQL Saturday #198 - Vancouver, Canada. The session I'll present in Vancouver is SQL Impossible: Restoring/Undeleting a table. Yes, you read the title right. No, it's not about the usual "one table per partition" and "restore full backup then copy the data over" methods. No, there are no 3rd party tools involved. Just you and your SQL Server. Yes, it's crazy. No, it's not for production purposes. And yes, that's why it's so much fun. Prepare to dive into the world of data pages, log records, deletes, truncates and backups, and how it all works together to get your table back from the endless void. Want to know more? Come and see! This is an advanced level session where we'll dive into the internals of data pages, transaction log records and page restores.

    March 8th-9th: SQL Saturday #194 - Exeter, UK. In Exeter I'll be presenting twice. On the first day I'll have a full-day precon titled From SQL Traces to Extended Events - The next big switch. This pre-con will give you insight into both of the current tracing technologies in SQL Server. The old SQL Trace, which has served us well over the past 10 or so years, is on its way out because the overhead and detail it produces are no longer enough to deal with today's loads. The new Extended Events are a lightweight tracing mechanism built directly into the SQLOS, giving us information SQL Trace just couldn't. They were designed and built with performance in mind and it shows. Mastering Extended Events requires learning at least one new skill: XML querying. The second session I'll have on Saturday is titled SQL Injection from website to SQL Server. SQL Injection is still one of the biggest reasons various websites and applications get hacked. The solution, as everyone tells us, is simple: use SQL parameters. But is that enough? In this session we'll look at how an attacker would go about using SQL Injection to gain access to your database, see its schema and data, take over the server, upload files and do various other mischief on your domain. This is a fun session that always brings out a few laughs in the audience because they didn't realize what can be done.

    April 23rd-25th: NTK conference - Bled, Slovenia (Slovenian website only). This is a conference with history; this year marks its 18th year running. It's a relatively large IT conference that focuses on various Microsoft technologies like .Net, Azure, SQL Server, Exchange, Security, etc. The main session language is Slovenian, but this is slowly changing, so it's becoming more interesting for foreign attendees. This year it's happening in the beautiful town of Bled in the Alps. The scenery alone is worth the visit, wouldn't you agree? And this year there are quite a few well known speakers present! The session title isn't known yet.

    May 2nd-4th: SQL Bits XI - Nottingham, UK. SQL Bits is the largest SQL Server conference in Europe. It's a 3-day conference with top speakers and content all dedicated to SQL Server. The session I'll present here is an hour-long version of the precon I'll give in Exeter: From SQL Traces to Extended Events - The next big switch. The session description is the same as for the Exeter precon, but we'll focus more on how the Extended Events work, with only a brief overview of the old SQL Trace architecture.

    Read the article

  • LINQ and ordering of the result set

    - by vik20000in
    After filtering and retrieving the records, most of the time (if not always) we have to sort the records in a certain order. The sort order is very important for displaying records or for major calculations. In LINQ, the orderby keyword is used for sorting data. With the help of the orderby keyword we can decide on the ordering of the result set that is retrieved by the query. Below is a simple example of the orderby keyword in LINQ.     string[] words = { "cherry", "apple", "blueberry" };     var sortedWords =         from word in words         orderby word         select word; Here we are ordering the retrieved data based on string ordering. If required, the ordering can also be based on any property of the individual item, such as the length of the string.     var sortedWords =         from word in words         orderby word.Length         select word; You can also make the order descending or ascending by adding the keyword after the parameter.     var sortedWords =         from word in words         orderby word descending         select word; But the best part of the order clause is that instead of just passing a field you can also pass the order clause an instance of any class that implements the IComparer interface. The IComparer interface defines a single method, Compare, that has to be implemented. In that method we can write any logic whatsoever for the comparison. In the example below we are making a string comparison that ignores case. string[] words = { "aPPLE", "AbAcUs", "bRaNcH", "BlUeBeRrY", "cHeRry"}; var sortedWords = words.OrderBy(a => a, new CaseInsensitiveComparer());  public class CaseInsensitiveComparer : IComparer<string> {     public int Compare(string x, string y)     {         return string.Compare(x, y, StringComparison.OrdinalIgnoreCase);     } }  But while sorting the data, we often want to provide more than one sort key so that the data is sorted based on more than one condition. This can be achieved by providing the next ordering key after a comma.     var sortedWords =         from word in words         orderby word, word.Length         select word; We can also use the Reverse() method to reverse the full order of the result set.     var reversedWords = words.Reverse();     Vikram
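    To tie these pieces together, here is a single self-contained sketch (the sample data is purely illustrative) combining a secondary ordering key, descending order, a custom IComparer<string> and reversing the whole result set:

        // Illustrative only: combines the ordering techniques described above.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class CaseInsensitiveComparer : IComparer<string>
        {
            public int Compare(string x, string y)
            {
                return string.Compare(x, y, StringComparison.OrdinalIgnoreCase);
            }
        }

        public static class OrderingDemo
        {
            public static void Main()
            {
                string[] words = { "aPPLE", "AbAcUs", "bRaNcH", "BlUeBeRrY", "cHeRry" };

                // Primary key: length (ascending); secondary key: the word itself, descending.
                var byLengthThenWord =
                    from word in words
                    orderby word.Length, word descending
                    select word;

                // Custom comparison logic supplied through IComparer<string>.
                var caseInsensitive = words.OrderBy(w => w, new CaseInsensitiveComparer());

                // Reverse the order of the whole sequence.
                var reversed = words.Reverse();

                Console.WriteLine(string.Join(", ", byLengthThenWord));
                Console.WriteLine(string.Join(", ", caseInsensitive));
                Console.WriteLine(string.Join(", ", reversed));
            }
        }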

    Read the article

  • SQL SERVER – Query Hint – Contest Win Joes 2 Pros Combo (USD 198) – Day 1 of 5

    - by pinaldave
    In August 2011 we ran a contest where every day we gave away one book for an entire month. The contest was extremely successful: lots of people participated and lots of books were given away. I have received lots of questions asking if we are doing something similar this month. Absolutely. Instead of running a month-long contest we are doing something more interesting: we are giving away a gift worth USD 198 every day for this week. We are giving away the Joes 2 Pros 5-volume (BOOK) SQL 2008 Development Certification Training Kit every day, one copy in India and one in the USA, for a total of 2 giveaways (worth USD 198). All the gifts are sponsored by Koenig Training Solutions and Joes 2 Pros. The books are available here: Amazon | Flipkart | Indiaplaza How to Win: Read the Question Read the Hints Answer the Quiz in the Contact Form in the following format: Question Answer Name of the country (The contest is open for USA and India residents only) 2 winners will be randomly selected and announced on August 20th. Question of the Day: Which of the following queries will return dirty data? a) SELECT * FROM Table1 (READUNCOMMITED) b) SELECT * FROM Table1 (NOLOCK) c) SELECT * FROM Table1 (DIRTYREAD) d) SELECT * FROM Table1 (MYLOCK) Query Hints: BIG HINT POST Most SQL people know what a "Dirty Record" is. You might also call it an "Intermediate record". In case this is new to you, here is a very quick explanation. The simplest way to describe the steps of a transaction is to use an example of updating an existing record in a table. When the update runs, SQL Server gets the data from storage, such as a hard drive, and loads it into memory and your CPU. The data in memory is changed and then saved to the storage device. Finally, a message is sent confirming the rows that were affected. For a very short period of time the update holds the data in memory in an intermediate state, not a permanent state. For every data change to a table there is a brief moment where the change exists in this intermediate state but is not committed. During this time, any other DML statement (SELECT, INSERT, DELETE, UPDATE) needing that data must wait until the lock is released. This is a safety feature put in place so that SQL Server evaluates only official data. Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 1. SQL Joes 2 Pros Development Series – Dirty Records and Table Hints SQL Joes 2 Pros Development Series – Row Constructors SQL Joes 2 Pros Development Series – Finding un-matching Records SQL Joes 2 Pros Development Series – Efficient Query Writing Strategy SQL Joes 2 Pros Development Series – Finding Apostrophes in String and Text SQL Joes 2 Pros Development Series – Wildcard – Querying Special Characters SQL Joes 2 Pros Development Series – Wildcard Basics Recap Next Step: Answer the Quiz in the Contact Form in the following format: Question Answer Name of the country (The contest is open for USA and India) Bonus Winner: Leave a comment with your favorite article from the "additional hints" section and you may be eligible for a surprise gift. There is no country restriction for this Bonus Contest. Do mention why you liked that particular blog post, and I will announce the winner of the bonus along with the main contest. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
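    As a quick, hands-on illustration of the intermediate state described in the hint above, here is a minimal ADO.NET sketch (an illustrative example, not from the original post; the connection string, table name and column names are placeholders): one connection changes a row inside an open transaction, and a second connection reads the uncommitted value using the NOLOCK hint.

        // Illustrative sketch of a dirty read. Assumes a SQL Server database with a
        // table named Table1(Id, Col1); the connection string is a placeholder.
        using System;
        using System.Data.SqlClient;

        class DirtyReadDemo
        {
            static void Main()
            {
                const string connectionString = "Server=.;Database=TestDb;Integrated Security=true;";

                using (var writer = new SqlConnection(connectionString))
                using (var reader = new SqlConnection(connectionString))
                {
                    writer.Open();
                    reader.Open();

                    // Change a row inside a transaction, but do not commit yet.
                    SqlTransaction tx = writer.BeginTransaction();
                    var update = new SqlCommand(
                        "UPDATE Table1 SET Col1 = 'uncommitted value' WHERE Id = 1", writer, tx);
                    update.ExecuteNonQuery();

                    // On a second connection, NOLOCK (i.e. READ UNCOMMITTED) sees the intermediate state.
                    var dirtyRead = new SqlCommand(
                        "SELECT Col1 FROM Table1 WITH (NOLOCK) WHERE Id = 1", reader);
                    Console.WriteLine(dirtyRead.ExecuteScalar()); // prints 'uncommitted value'

                    tx.Rollback(); // the value the reader saw never officially existed
                }
            }
        }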

    Read the article

  • Employee Info Starter Kit: Project Mission

    - by Mohammad Ashraful Alam
    Employee Info Starter Kit is an open source ASP.NET project template that is intended to address different types of real-world challenges faced by web application developers when performing common CRUD operations. Using a single database table, 'Employee', it illustrates how to utilize Microsoft ASP.NET 4.0, Entity Framework 4.0 and Visual Studio 2010 effectively in that context. User Stories: The user-facing functionality of this starter kit is pretty simple and straightforward, focused on performing CRUD operations on employee records as described below: creating a new employee record, reading an existing employee record, updating an existing employee record, and deleting existing employee records. Key Technology Areas: ASP.NET 4.0, Entity Framework 4.0, T-4 Template, Visual Studio 2010. Architectural Objective: There is no universal architecture which can be considered the best for all sorts of applications around the world. Based on requirements, constraints and environment, application architecture can differ from one case to another. Trade-off factors are one of the important considerations when deciding on a particular architectural solution. Employee Info Starter Kit is highly influenced by the 'Pareto Principle', or 80-20 rule, where it is targeted to enable a web developer to gain 80% productivity with 20% of the effort with respect to learning curve and production. "Productivity" as the architectural objective typically also includes other trade-off factors, such as testability, flexibility, performance, etc. Fortunately, Microsoft .NET Framework 4.0 and Visual Studio 2010 include lots of great features that have been used cleverly in this project to keep these trade-offs to a minimum. Why Employee Info Starter Kit is Not a Framework: Application frameworks are really great for productivity, and some of them are unavoidable in this modern age. However, relying on too many frameworks may be overkill for a project, as frameworks are typically designed to serve a wide range of different usages and are less customizable or editable. On the other hand, having implementation patterns can be useful for developers, as they enable developers to adjust the application on demand. Employee Info Starter Kit provides hundreds of "connected" snippets and implementation patterns to demonstrate problem solutions in an actual production environment. It also includes Visual Studio T-4 templates that generate thousands of lines of repetitive data access and business logic layer code in literally a few seconds on the fly, which are fully mock-testable due to language support for partial methods and the latest support for mock testing in Entity Framework. Why Employee Info Starter Kit is Different from Other Open-source Web Applications: Software development is one of the most rapidly growing industries around the globe, where the technology is updated very frequently to meet greater challenges over time. There are literally thousands of community web sites, blogs and forums that are dedicated to providing support for adopting new technologies.
    While some are really great for learning new technologies quickly, in most cases they are either too "simple and brief" to be used in real-world scenarios or too "complex and detailed", typically focused on achieving a product goal (such as CMS, e-Commerce, etc.) from an "end user" perspective, with a long learning curve for the corresponding technology. Employee Info Starter Kit, as a web project, is basically "developer" oriented and takes a hybrid approach of "simple and detailed": a simple domain has been chosen to intentionally illustrate most of the architectural and implementation challenges faced by web application developers, so that anyone can dive deep into the corresponding new technology or concept quickly. Roadmap: Since its first release in 2008 in the MSDN Code Gallery, Employee Info Starter Kit has gained huge popularity in the ASP.NET community, with 150,000+ downloads since then. Encouraged by this great response, we have a strong commitment to the community to continuously provide support for it with respect to the latest technologies. Currently hosted on CodePlex, this community-driven project is planned to have a wide range of individual editions, each of which will be focused on a selected application architecture, framework or platform, such as ASP.NET Webform, ASP.NET Dynamic Data, ASP.NET MVC, jQuery Ajax (RIA), Silverlight (RIA), Azure Service Platform (Cloud), Visual Studio Automated Test, etc. See here for the full list of current and future editions.

    Read the article

  • SQL SERVER – Partition Parallelism Support in expressor 3.6

    - by pinaldave
    I am very excited to learn that there is a new version of expressor's data integration platform coming out in March of this year. It will be version 3.6, and I look forward to using it and telling everyone about it. Let me describe a little bit more about what will be so great in expressor 3.6: a greatly enhanced user interface, parallel processing, and bulk artifact upgrading.
    The User Interface: First let me cover the most obvious enhancements. The expressor Studio user interface (UI) has had some significant work done. Kudos to the expressor Engineering team; the entire UI is a visual masterpiece that is very responsive and intuitive. The improvements are more than just eye candy; they provide significant productivity gains when developing expressor Dataflows. Operator shape icons now include a description that identifies the function of each operator, instead of having to guess at the function from the icon. Operator shapes and highlighting depict the current function and status: disabled, enabled, complete, incomplete, and error. Each status displays an appropriate message in the message panel with correction suggestions. Floating or docking property panels provide descriptive tool tips for each property as well as auto-resize when adjusting the canvas, without having to search Help or scroll around to get access to the property. Progress and status indicators let you know when an operation is working. A "no limit" canvas with snap-to-grid allows automatic sizing and accurate positioning when you have numerous operators in the Dataflow. The inline tool bar offers quick access to pan, zoom, fit and overview functions. Selecting multiple artifacts with a right-click context menu allows you to manage your workspace more efficiently.
    Partitioning and Parallel Processing: Partitioning allows each operator to process multiple subsets of records in parallel, as opposed to processing all records that flow through that operator in a single sequential set. This capability allows the user to configure the expressor Dataflow to run in a way that most efficiently utilizes the resources of the hardware where the Dataflow is running. Partitions can exist in most individual operators. Using partitions increases the speed of an expressor data integration application, thereby improving performance and load times. With the expressor 3.6 Enterprise Edition, expressor simplifies enabling parallel processing by adding intuitive partition settings that are easy to configure.
    Bulk Artifact Upgrading: Bulk artifact upgrading sounds a bit intimidating, but it actually is not, and it is a welcome addition to expressor Studio. In past releases, users were prompted to confirm that they wanted to upgrade their individual artifacts only when they were opened. This was a cumbersome and repetitive process. Now with bulk artifact upgrading, a user can easily select an artifact or a group of artifacts and upgrade them all at once.
    As you can see, there are many new features and upgrade options that will prove to make expressor Studio quicker and more efficient. I hope I'm not the only one who is excited about all these new upgrades, and I hope you try expressor and share your experience with me. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Threading Overview

    - by ACShorten
    One of the major features of the batch framework is its support for multi-threading. The multi-threading support allows a site to increase throughput on an individual batch job by splitting the total workload across multiple individual threads. This means each thread has fine-grained control over a segment of the total data volume at any time. The idea behind the threading is based upon the notion that "many hands make light work". Each thread takes a segment of data in parallel and operates on that smaller set. The object identifier allocation algorithm built into the product randomly assigns keys to help ensure an even distribution of the number of records across the threads and to minimize resource and lock contention. The best way to visualize the concept of threading is to use a "pie" analogy. Imagine the total workset for a batch job is a "pie". If you split that pie into equal-sized segments, each segment would represent an individual thread. The concept of threading has advantages and disadvantages: Smaller elapsed runtimes - Jobs that are multi-threaded finish earlier than jobs that are single threaded. With smaller amounts of work to do, jobs with threading will finish earlier. Note: The elapsed runtime of the threads is rarely proportional to the number of threads executed. Even though contention is minimized, some contention does exist for resources, which can adversely affect runtime. Threads can be managed individually - Each thread can be started individually and can also be restarted individually in case of failure. If you need to rerun thread X, then that is the only thread that needs to be resubmitted. Threading can be somewhat dynamic - The number of threads that are run on any instance can be varied, as the thread number and thread limit are parameters passed to the job at runtime. They can also be configured using the configuration files outlined in this document and the relevant manuals. Note: Threading is not dynamic after the job has been submitted. Failure risk due to data issues with threading is reduced - As mentioned earlier, individual threads can be restarted in case of failure. This limits the risk to the total job if there is a data issue with a particular thread or a group of threads. Number of threads is not infinite - As with any resource there is a theoretical limit. While the thread limit can be up to 1000 threads, the number of threads you can physically execute will be limited by the CPU and IO resources available to the job at execution time. Theoretically, with the object identifiers evenly spread across the threads, the elapsed runtimes for the threads should all be the same. In other words, when executing in multiple threads, theoretically all the threads should finish at the same time. Whilst this is possible, it is also possible that individual threads may take longer than other threads for the following reasons: Workloads within the threads are not always the same - Whilst each thread is operating on roughly the same number of objects, the amount of processing for each object is not always the same. For example, an account may have a more complex rate which requires more processing, or a meter may have a complex amount of configuration to process. If a thread has a higher proportion of objects with complex processing, it will take longer than a thread with simple processing. The amount of processing is dependent on the configuration of the individual data for the job.
    Data may be skewed - Even though the object identifier generation algorithm attempts to spread the object identifiers across threads, there are some jobs that use additional factors to select records for processing. If any of those factors exhibit data skew, then certain threads may finish later. For example, if more accounts are allocated to a particular part of a schedule, then threads in that schedule may finish later than other threads. Threading is important to the success of individual jobs. For more guidelines and techniques for optimizing threading, refer to Multi-Threading Guidelines in the Batch Best Practices for Oracle Utilities Application Framework based products (Doc Id: 836362.1) whitepaper available from My Oracle Support.
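    The "pie" analogy can be sketched in a few lines of C#. This is purely illustrative and is not the product's batch framework: the total range of object identifiers is cut into equal slices and each slice is processed on its own thread, so any one slice can be rerun on its own if it fails.

        // Generic illustration of splitting a workload into equal "slices" of a key range.
        using System;
        using System.Linq;
        using System.Threading.Tasks;

        class ThreadedBatchSketch
        {
            static void Main()
            {
                int totalRecords = 1000000; // hypothetical workload
                int threadCount = 8;        // "thread limit" chosen when the job is submitted
                int sliceSize = (totalRecords + threadCount - 1) / threadCount;

                Task[] threads = Enumerable.Range(0, threadCount)
                    .Select(threadNumber => Task.Run(() =>
                    {
                        int start = threadNumber * sliceSize;
                        int end = Math.Min(start + sliceSize, totalRecords);
                        for (int id = start; id < end; id++)
                        {
                            // process the record identified by 'id'
                        }
                    }))
                    .ToArray();

                Task.WaitAll(threads); // in practice, slices rarely finish at exactly the same time
            }
        }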

    Read the article

  • Oracle Employees Support New World Record for IYF Children's Hour

    - by Maria Sandu
    960 students 'crouched', 'touched' and 'set' under the watchful eye of International Rugby Referee Alain Roland, and supported by Oracle employees, to successfully set a new world record for the World's Largest Scrum to raise funds and awareness for the Irish Youth Foundation. Last year Oracle employees supported the Irish Youth Foundation by donating funds from their payroll through the Giving Tree Appeal. We were the largest corporate donor to the IYF, raising €3075. To acknowledge our generosity, the IYF asked Oracle Leadership in Society team members to participate in their most recent campaign, which was to break the Guinness Book of Records by forming the World's Largest Rugby Scrum. This was a wonderful opportunity for Oracle's Leadership in Society to promote the charity, support education and make a mark in the Corporate Social Responsibility field. The students who formed the scrum also gave up their lunch money and raised a total of €3000. This year we hope Oracle employees will once again support the IYF with the challenge to match that amount. On the 24th of October the sun shone down on the streaming lines of students entering the field. 480 students were decked out in bright red Oracle T-shirts against the other 480 in blue and white jerseys - all ready to form a striking scrum. Ryan Tubridy, the host of the event, made the opening announcement, and with the blow of a whistle the scrum began. 960 students locked tight together with the Leinster players at each side. Leinster Manager Matt O'Connor was there along with presenters Ryan Tubridy and George Hook to assist with getting the boys in line and keeping the shape of the scrum. In accordance with Guinness Book of Records rules, the ball was fed into the scrum properly by Ireland and Leinster scrum-half Eoin Reddan, and was then passed out the line to his Leinster team mates, including Ian Madigan, Brendan Macken and Jordi Murphy, also proudly sporting the Oracle T-shirt. The new world record was made, everyone gave a big cheer and thankfully nobody got injured! Thank you to everyone in Oracle who donated last year through the Giving Tree Appeal. Your generosity has gone a long way to supporting local youth groups. Last year's donation was so substantial that the IYF were able to spread it across two youth groups. The first was the Ballybough Youth Project in Dublin. The funding gave them the chance to give 24 young people from their project the opportunity to get away from the inner city and the problems and issues they face in their daily lives by taking a trip to the Cavan Centre to spend a weekend away in a safe and comfortable environment; a very rare holiday in these young people's lives. The second, the Rahoon Family Centre, used the money to help secure the long-term sustainability of their project. They act as an educational/social/fun project that has been working with disadvantaged children for the past 16 years. Their aim is to change young people's futures with fun and social education, supporting them so they can maximize their creativity and potential. We hope you can help support this worthy cause again this year, so keep an eye out for the Children's Hour and Giving Tree Appeal! About the Irish Youth Foundation: The IYF provides opportunities for marginalised children and young people facing difficult and extreme conditions to experience success in their lives. It passionately believes that achievement starts with opportunity.
The IYF’s strategy is based on providing safe places where children can go after school; to grow, to learn and to play; and providing opportunities for teenagers from under-served communities to succeed and excel in their lives. The IYF supports innovative grassroots projects operated by dedicated professionals who understand young people and care about them. This allows the IYF to focus on supporting young people at risk of dropping out of school and, in particular, on the critical transition from primary to secondary school; and empowering teenagers from disadvantaged neighborhoods to become engaged in their local communities. Find out more here www.iyf.ie

    Read the article

  • Bug in Delphi XE RegularExpressions Unit

    - by Jan Goyvaerts
    Using the new RegularExpressions unit in Delphi XE, you can iterate over all the matches that a regex finds in a string like this: procedure TForm1.Button1Click(Sender: TObject); var RegEx: TRegEx; Match: TMatch; begin RegEx := TRegex.Create('\w+'); Match := RegEx.Match('One two three four'); while Match.Success do begin Memo1.Lines.Add(Match.Value); Match := Match.NextMatch; end end; Or you could save yourself two lines of code by using the static TRegEx.Match call: procedure TForm1.Button2Click(Sender: TObject); var Match: TMatch; begin Match := TRegEx.Match('One two three four', '\w+'); while Match.Success do begin Memo1.Lines.Add(Match.Value); Match := Match.NextMatch; end end; Unfortunately, due to a bug in the RegularExpressions unit, the static call doesn’t work. Depending on your exact code, you may get fewer matches or blank matches than you should, or your application may crash with an access violation. The RegularExpressions unit defines TRegEx and TMatch as records. That way you don’t have to explicitly create and destroy them. Internally, TRegEx uses TPerlRegEx to do the heavy lifting. TPerlRegEx is a class that needs to be created and destroyed like any other class. If you look at the TRegEx source code, you’ll notice that it uses an interface to destroy the TPerlRegEx instance when TRegEx goes out of scope. Interfaces are reference counted in Delphi, making them usable for automatic memory management. The bug is that TMatch and TGroupCollection also need the TPerlRegEx instance to do their work. TRegEx passes its TPerlRegEx instance to TMatch and TGroupCollection, but it does not pass the instance of the interface that is responsible for destroying TPerlRegEx. This is not a problem in our first code sample. TRegEx stays in scope until we’re done with TMatch. The interface is destroyed when Button1Click exits. In the second code sample, the static TRegEx.Match call creates a local variable of type TRegEx. This local variable goes out of scope when TRegEx.Match returns. Thus the reference count on the interface reaches zero and TPerlRegEx is destroyed when TRegEx.Match returns. When we call MatchAgain the TMatch record tries to use a TPerlRegEx instance that has already been destroyed. To fix this bug, delete or rename the two RegularExpressions.dcu files and copy RegularExpressions.pas into your source code folder. Make these changes to both the TMatch and TGroupCollection records in this unit: Declare FNotifier: IInterface; in the private section. Add the parameter ANotifier: IInterface; to the Create constructor. Assign FNotifier := ANotifier; in the constructor’s implementation. You also need to add the ANotifier: IInterface; parameter to the TMatchCollection.Create constructor. Now try to compile some code that uses the RegularExpressions unit. The compiler will flag all calls to TMatch.Create, TGroupCollection.Create and TMatchCollection.Create. Fix them by adding the ANotifier or FNotifier parameter, depending on whether ARegEx or FRegEx is being passed. With these fixes, the TPerlRegEx instance won’t be destroyed until the last TRegEx, TMatch, or TGroupCollection that uses it goes out of scope or is used with a different regular expression.

    Read the article

  • WebCenter Customer Spotlight: Ancestry.com

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter. Solution Summary: Ancestry.com Inc is the largest for-profit genealogy company in the world. It operates a network of genealogical and historical record websites focused on the U.S. and nine foreign countries, develops and markets genealogical software, and offers a wide array of genealogy-related services. As of June 2012, the company provided access to more than 10 billion records, 38 million family trees, and 2 million paying subscribers. Their main business challenges were to improve time to market and agility, responding quickly to fast-changing Internet waves while integrating with their existing content (4 petabytes) and legacy systems. Ancestry.com implemented Oracle WebCenter Sites as their Web Experience Management System for their landing pages and marketing micro sites, added dynamic sections to their existing websites, and integrated the existing content and legacy systems through web services. The Ancestry.com landing pages and marketing sites are now managed by the business team without any involvement of engineering resources. Managed content can quickly be added to existing pages without having to refactor the whole page, and existing content (4 petabytes) is now served through Oracle WebCenter Sites without having to migrate from existing systems. Company Overview: Ancestry.com Inc is a publicly traded Internet company (NASDAQ: ACOM) based in Provo, Utah, USA. The largest for-profit genealogy company in the world, it operates a network of genealogical and historical record websites focused on the U.S. and nine foreign countries, develops and markets genealogical software, and offers a wide array of genealogy-related services. As of June 2012, the company provided access to more than 10 billion records, 38 million family trees, and 2 million paying subscribers. Business Challenges: Ancestry's main business challenge was to respond quickly to fast-changing Internet waves. Product marketing could not change Web site content without going through development, and they needed dedicated developers just to support their marketing efforts. Technical Requirements: Support current systems and environments - ASP.NET, MVC.NET, Java, JSP, PHP; scalable and manageable for a worldwide network. Marketing Requirements: Easy to enter content - without having a degree in HTML; scheduling of content - when is content visible to users. Product Requirements: Easy to manage content - see when content is out-of-date; rotation of content - producing new content as old content expires. Solution Deployed: Ancestry implemented Oracle WebCenter Sites as their Web Experience Management System to manage their landing pages and marketing micro sites. These sites are fully managed by their business team without the involvement of any engineering resources. The integration with their existing Web sites is done through Spot Management, which provides the ability to add dynamic content to certain sections of a web page. The dynamic content is managed by Oracle WebCenter Sites. The integration with the existing content (4 petabytes!) is done through a custom content provider interface, which allows existing content to be mixed with content from Oracle WebCenter Sites.
    Business Results: Ancestry.com has achieved the following impressive business results: landing pages and marketing sites are now managed by the business team without any involvement of engineering resources; managed content can quickly be added to existing pages without having to refactor the whole page; and access to existing content (4 petabytes) is provided without having to migrate from existing systems. Additional Information: Ancestry Webcast, Oracle WebCenter Sites

    Read the article

  • Data Security Through Structure, Procedures, Policies, and Governance

    Security Structure and Procedures: One of the easiest ways to implement security is through the use of structure, in particular the structure in which data is stored. The preferred method for this is through the use of User Roles; these roles allow specific access to be granted based on what role a user plays in relation to the data they are manipulating. Typical data access actions are defined by the CRUD principle. CRUD Principle: Create New Data, Read Existing Data, Update Existing Data, Delete Existing Data. Based on the actions assigned to a role, users can manipulate data as they need to perform daily business operations. An example of this can be seen in a hospital where doctors have been assigned Create, Read, Update, and Delete access to their patients' prescriptions, so that a doctor can prescribe and adjust any existing prescriptions as necessary. However, a nurse will only have Read access to the patients' prescriptions, so that they know what medicines to give to the patients. Notice that nurses do not have access to prescribe new prescriptions, or to update or delete existing prescriptions, because only the patient's doctor has access to perform those actions. With User Roles comes responsibility: companies need to constantly monitor data access to ensure that the proper roles have the most appropriate access levels and that users are not exposed to inappropriate data. In addition, this also protects against rogue employees gaining access to critical business information that could be destroyed, altered or stolen. It is important that all data access is monitored because of this threat. Security Governance: Current data governance laws regarding security include the Health Insurance Portability and Accountability Act (HIPAA), the Sarbanes-Oxley Act, and the Database Breach Notification Act. The US Department of Health and Human Services defines HIPAA as a Privacy Rule. This legislation protects the privacy of individually identifiable health information. Currently, HIPAA sets the national standards for securing electronic protected health records. Additionally, its confidentiality provisions protect identifiable information being used to analyze patient safety events and improve patient safety. In 2002, in the wake of the Enron and WorldCom financial scandals, Senator Paul Sarbanes and Representative Michael Oxley led the creation of the Sarbanes-Oxley Act. This act, administered by the Securities and Exchange Commission (SEC), dramatically altered corporate financial practices and data governance. In addition, it also set specific deadlines for compliance. The Sarbanes-Oxley Act is not a set of standard business rules and does not specify how a company should retain its records; in fact, the act outlines which pieces of data are to be stored as well as the storage duration. The Database Breach Notification Act requires companies, in the event of a data breach containing personally identifiable information, to notify all California residents whose information was stored on the compromised system at the time of the event, according to Gregory Manter. He further explains that this act is only California legislation. However, it does affect "any person or business that conducts business in California, and that owns or licenses computerized data that includes personal information," regardless of where the compromised data is located. Any business that maintains at least limited interactions with California residents will find itself subject to the Act's provisions.
    Security Policies: All companies must work in accordance with the appropriate city, county, state, and federal laws. One way to ensure that a company is legally compliant is to enforce security policies that adhere to the appropriate legislation in the area or areas that they serve. These types of policies need to be mandated by a company's Security Officer. For smaller companies, these policies need to come from executives, directors, and owners.
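    As a small, hypothetical illustration of the CRUD-per-role idea above (the roles and permission values are invented for the example, not taken from the article), role permissions can be modeled as a flags enumeration and checked before an action is allowed:

        // Illustrative only: mapping roles to CRUD permissions and checking them.
        using System;

        [Flags]
        enum CrudPermission
        {
            None = 0,
            Create = 1,
            Read = 2,
            Update = 4,
            Delete = 8
        }

        static class PrescriptionSecurity
        {
            // A doctor gets full CRUD access to prescriptions; a nurse gets Read only.
            static CrudPermission GetPermissions(string role)
            {
                switch (role)
                {
                    case "Doctor":
                        return CrudPermission.Create | CrudPermission.Read |
                               CrudPermission.Update | CrudPermission.Delete;
                    case "Nurse":
                        return CrudPermission.Read;
                    default:
                        return CrudPermission.None;
                }
            }

            static bool CanPerform(string role, CrudPermission action)
            {
                return (GetPermissions(role) & action) == action;
            }

            static void Main()
            {
                Console.WriteLine(CanPerform("Nurse", CrudPermission.Read));   // True
                Console.WriteLine(CanPerform("Nurse", CrudPermission.Create)); // False
            }
        }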

    Read the article

  • SQL Server IO handling mechanism can be severely affected by high CPU usage

    - by sqlworkshops
    Are you using an SSD or SAN / NAS based storage solution and sporadically observing SQL Server experiencing high IO wait times, or does your DAS / HDD from time to time become very slow according to SQL Server statistics? Read on… I need your help to up-vote my Connect item – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage. Instead of taking a few seconds, queries could take minutes or hours to complete when the CPU is busy. In SQL Server, when a query / request needs to read data that is not in the data cache, or when the request has to write to disk, like transaction log records, the request / task will queue up the IO operation and wait for it to complete (task in suspended state; this wait time is the resource wait time). When the IO operation is complete, the task will be queued to run on the CPU. If the CPU is busy executing other tasks, this task will wait (task in runnable state) until other tasks in the queue either complete, or get suspended due to waits, or exhaust their quantum of 4ms (this is the signal wait time, which along with resource wait time will increase the overall wait time). When the CPU becomes free, the task will finally run on the CPU (task in running state). The signal wait time can be up to 4ms per runnable task; this is by design. So if a CPU has 5 runnable tasks in the queue, then this query, after the resource becomes available, might wait up to a maximum of 5 x 4ms = 20ms in the runnable state (normally less, as other tasks might not use the full quantum). In case the CPU usage is high, let's say many CPU-intensive queries are running on the instance, there is a possibility that the IO operations that are completed at the hardware and operating system level are not yet processed by SQL Server, keeping the task in the resource wait state for longer than necessary. In the case of an SSD, the IO operation might even complete in less than a millisecond, but it might take SQL Server hundreds of milliseconds, for instance, to process the completed IO operation. For example, let's say you have a user inserting 500 rows in individual transactions. When the transaction log is on an SSD or a battery-backed controller that has write cache enabled, all of these inserts will complete in 100 to 200ms. With a CPU-intensive parallel query executing across all CPU cores, the same inserts might take minutes to complete. WRITELOG wait time will be very high in this case (both under sys.dm_io_virtual_file_stats and sys.dm_os_wait_stats). In addition, you will notice a large number of WRITELOG waits, since log records are written by the LOG WRITER, and hence very high signal_wait_time_ms, leading to more query delays. However, the Performance Monitor counter PhysicalDisk, Avg. Disk sec/Write will report very low latency times. Such delayed IO handling also affects read operations, with artificially very high PAGEIOLATCH_SH wait time (with the number of PAGEIOLATCH_SH waits remaining the same). This problem will manifest more and more as customers start using SSD-based storage for SQL Server, since they drive the CPU usage to the limits with faster IOs. We have a few workarounds for specific scenarios, but we think Microsoft should resolve this issue at the product level.
    We have a Connect item open – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage – (with example scripts) to reproduce this behavior; please up-vote the item so the issue will be addressed by the SQL Server product team soon. Thanks for your help and best regards, Ramesh Meyyappan. Home: www.sqlworkshops.com LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • T-SQL (SCD) Slowly Changing Dimension Type 2 using a merge statement

    - by AtulThakor
    Working on a stored procedure recently that loads records into a data warehouse, I found that the existing record was being expired using an update statement followed by an insert to add the new active record. Playing around with the merge statement, you can actually expire the current record and insert a new record within one clean statement. This is how the statement works: we do the normal merge statement to insert a record when there is no match; if we match the record, we update the existing record by expiring and deactivating it. At the end of the merge statement we use the output clause to return the staging values for the update, wrap the whole merge statement within an insert statement, and add new rows for the records which were expired. I've added the full script at the bottom so you can paste it and play around.

    INSERT INTO ExampleFactUpdate
        (PolicyID,
        Status)
    SELECT -- these columns are returned from the output clause
        PolicyID,
        Status
    FROM
    (
        -- merge statement on the unique id, in this case PolicyID
        MERGE dbo.ExampleFactUpdate dp
        USING dbo.ExampleStag s
            ON dp.PolicyID = s.PolicyID
        WHEN NOT MATCHED THEN -- when we can't match the record we insert a new record and this is all that happens
            INSERT (PolicyID, Status)
            VALUES (s.PolicyID, s.Status)
        WHEN MATCHED -- if it already exists
            AND ExpiryDate IS NULL -- and the expiry date is null
        THEN
            UPDATE
            SET
                dp.ExpiryDate = getdate(), -- we set the expiry on the existing record
                dp.Active = 0 -- and deactivate the existing record
        OUTPUT $Action MergeAction, s.PolicyID, s.Status -- the output clause returns a merge action which can
    ) MergeOutput -- be insert/update/delete; in our example a record has been updated (or expired in our case)
    WHERE -- so we'll filter using a where clause
        MergeAction = 'Update'; -- here

    Complete source for the example:

    if OBJECT_ID('ExampleFactUpdate') > 0
        drop table ExampleFactUpdate
    go

    Create Table ExampleFactUpdate(
        ID int identity(1,1),
        PolicyID varchar(100),
        Status varchar(100),
        EffectiveDate datetime default getdate(),
        ExpiryDate datetime,
        Active bit default 1
    )

    insert into ExampleFactUpdate(
        PolicyID,
        Status)
    select
        1,
        'Live'

    /* Create staging table */
    if OBJECT_ID('ExampleStag') > 0
        drop table ExampleStag
    go

    Create Table ExampleStag(
        PolicyID varchar(100),
        Status varchar(100))

    -- add some data
    insert into ExampleStag(
        PolicyID,
        Status)
    select
        1,
        'Lapsed'
    union all
    select
        2,
        'Quote'

    select *
    from ExampleFactUpdate

    select *
    from ExampleStag

    INSERT INTO ExampleFactUpdate
        (PolicyID,
        Status)
    SELECT
        PolicyID,
        Status
    FROM
    (
        MERGE dbo.ExampleFactUpdate dp
        USING dbo.ExampleStag s
            ON dp.PolicyID = s.PolicyID
        WHEN NOT MATCHED THEN
            INSERT (PolicyID, Status)
            VALUES (s.PolicyID, s.Status)
        WHEN MATCHED
            AND ExpiryDate IS NULL
        THEN
            UPDATE
            SET
                dp.ExpiryDate = getdate(),
                dp.Active = 0
        OUTPUT $Action MergeAction, s.PolicyID, s.Status
    ) MergeOutput
    WHERE
        MergeAction = 'Update';

    select *
    from ExampleFactUpdate

    Read the article

  • Dynamically load and call delegates based on source data

    - by makerofthings7
    Assume I have a stream of records that need to have some computation. Records will have a combination of these functions run Sum, Aggregate, Sum over the last 90 seconds, or ignore. A data record looks like this: Date;Data;ID Question Assuming that ID is an int of some kind, and that int corresponds to a matrix of some delegates to run, how should I use C# to dynamically build that launch map? I'm sure this idea exists... it is used in Windows Forms which has many delegates/events, most of which will never actually be invoked in a real application. The sample below includes a few delegates I want to run (sum, count, and print) but I don't know how to make the quantity of delegates fire based on the source data. (say print the evens, and sum the odds in this sample) using System; using System.Threading; using System.Collections.Generic; internal static class TestThreadpool { delegate int TestDelegate(int parameter); private static void Main() { try { // this approach works is void is returned. //ThreadPool.QueueUserWorkItem(new WaitCallback(PrintOut), "Hello"); int c = 0; int w = 0; ThreadPool.GetMaxThreads(out w, out c); bool rrr =ThreadPool.SetMinThreads(w, c); Console.WriteLine(rrr); // perhaps the above needs time to set up6 Thread.Sleep(1000); DateTime ttt = DateTime.UtcNow; TestDelegate d = new TestDelegate(PrintOut); List<IAsyncResult> arDict = new List<IAsyncResult>(); int count = 1000000; for (int i = 0; i < count; i++) { IAsyncResult ar = d.BeginInvoke(i, new AsyncCallback(Callback), d); arDict.Add(ar); } for (int i = 0; i < count; i++) { int result = d.EndInvoke(arDict[i]); } // Give the callback time to execute - otherwise the app // may terminate before it is called //Thread.Sleep(1000); var res = DateTime.UtcNow - ttt; Console.WriteLine("Main program done----- Total time --> " + res.TotalMilliseconds); } catch (Exception e) { Console.WriteLine(e); } Console.ReadKey(true); } static int PrintOut(int parameter) { // Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Delegate PRINTOUT waited and printed this:"+parameter); var tmp = parameter * parameter; return tmp; } static int Sum(int parameter) { Thread.Sleep(5000); // Pretend to do some math... maybe save a summary to disk on a separate thread return parameter; } static int Count(int parameter) { Thread.Sleep(5000); // Pretend to do some math... maybe save a summary to disk on a separate thread return parameter; } static void Callback(IAsyncResult ar) { TestDelegate d = (TestDelegate)ar.AsyncState; //Console.WriteLine("Callback is delayed and returned") ;//d.EndInvoke(ar)); } }
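    One possible shape for such a launch map (a sketch only; the record layout, IDs and operations below are hypothetical) is a dictionary keyed by the record ID whose value is the set of delegates to fire for that ID:

        // Illustrative dispatch map: each record ID selects the delegates to run.
        using System;
        using System.Collections.Generic;

        class Record
        {
            public DateTime Date;
            public double Data;
            public int ID;
        }

        static class DispatchMapSketch
        {
            // Placeholder operations standing in for Sum, Count, Print, etc.
            static double Sum(Record r) { return r.Data; }
            static double Count(Record r) { return 1; }
            static double Print(Record r) { Console.WriteLine(r.Data); return r.Data; }

            // The "launch map": which delegates fire for which record ID.
            static readonly Dictionary<int, Func<Record, double>[]> LaunchMap =
                new Dictionary<int, Func<Record, double>[]>
                {
                    { 1, new Func<Record, double>[] { Print } },      // e.g. print these records
                    { 2, new Func<Record, double>[] { Sum, Count } }  // e.g. sum and count these
                };

            static void Dispatch(Record record)
            {
                Func<Record, double>[] handlers;
                if (LaunchMap.TryGetValue(record.ID, out handlers))
                {
                    foreach (var handler in handlers)
                        handler(record); // or handler.BeginInvoke(...) as in the sample above
                }
                // unknown IDs are simply ignored
            }
        }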

    Read the article

  • How can I get the data from the json one by one by using javascript/jquery? [on hold]

    - by sandhus
    I have the working code which fetches all the records from the json, but how can I make it available one by one on the click of the button or link? The following code is working to fetch all the records: <!doctype html> <html> <head> <meta charset="utf-8"> <title>jQuery PHP Json Response</title> <style type="text/css"> div { text-align:center; padding:10px; } #msg { width: 500px; margin: 0px auto; } .members { width: 500px ; background-color: beige; } </style> </head> <body> <div id="msg"> <table id="userdata" border="1"> <thead> <th>Email</th> <th>Sex</th> <th>Location</th> <th>Picture</th> <th>audio</th> <th>video</th> </thead> <tbody></tbody> </table> </div> <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"> </script> <script type="text/javascript"> $(document).ready(function(){ var url="json.php"; $("#userdata tbody").html(""); $.getJSON(url,function(data){ $.each(data.members, function(i,user){ var tblRow = "<tr>" +"<td>"+user.email+"</td>" +"<td>"+user.sex+"</td>" +"<td>"+user.location+"</td>" +"<td>"+"<img src="+user.image+">"+"</td>" +"<td>"+"<audio src="+user.video+" controls>"+"</td>" +"<td>"+"<video src="+user.video+" controls>"+"</td>" +"</tr>" ; $(tblRow).appendTo("#userdata tbody"); }); }); }); </script> </body> </html> I used the json_encode function in the php file to encode the sql db. How can i achieve this?

    Read the article

  • return the result of a query and the total number of rows in a single function

    - by csotelo
    This is a question about how best to approach things, and whether there are other alternatives or this is the only way. Using CodeIgniter ... I have the typical 2 functions: one that lists records and one that returns the total number of records (used for paging). The problem is that they are rather large. Here are the 2 functions from my model: count rows: function get_all_count() { $this->db->select('u.id_user'); $this->db->from('user u'); if($this->session->userdata('detail') != '1') { $this->db->join('management m', 'm.id_user = u.id_user', 'inner'); $this->db->where('id_detail', $this->session->userdata('detail')); if($this->session->userdata('management') === '1') { $this->db->or_where('detail', 1); } else { $this->db->where("id_profile IN ( SELECT e2.id_profile FROM profile e, profile e2, profile_path p, profile_path p2 WHERE e.id_profile = " . $this->session->userdata('profile') . " AND p2.id_profile = e.id_profile AND p.path LIKE(CONCAT(p2.path,'%')) AND e2.id_profile = p.id_profile )", NULL, FALSE); $this->db->where('MD5(u.id_user) <>', $this->session->userdata('id_user')); } } $this->db->where('u.id_user <>', 1); $this->db->where('flag <>', 3); $query = $this->db->get(); return $query->num_rows(); } results per page: function get_all($limit, $offset, $sort = '') { $this->db->select('u.id_user, user, email, flag'); $this->db->from('user u'); if($this->session->userdata('detail') != '1') { $this->db->join('management m', 'm.id_user = u.id_user', 'inner'); $this->db->where('id_detail', $this->session->userdata('detail')); if($this->session->userdata('management') === '1') { $this->db->or_where('detail', 1); } else { $this->db->where("id_profile IN ( SELECT e2.id_profile FROM profile e, profile e2, profile_path p, profile_path p2 WHERE e.id_profile = " . $this->session->userdata('profile') . " AND p2.id_profile = e.id_profile AND p.path LIKE(CONCAT(p2.path,'%')) AND e2.id_profile = p.id_profile )", NULL, FALSE); $this->db->where('MD5(u.id_user) <>', $this->session->userdata('id_user')); } } $this->db->where('u.id_user <>', 1); $this->db->where('flag <>', 3); if($sort) $this->db->order_by($sort); $this->db->limit($limit, $offset); $query = $this->db->get(); return $query->result(); } As you can see, I repeat most of the function; the only differences are the selected fields and the paging. I wonder if there is an alternative that returns both the results and the total count from a single function. I have seen many tutorials, and they all create 2 functions: one to count and another to return the results... Is there a more optimal way?

    Read the article

  • Using MVVM, how to pass SelectedItems of a XamDataGrid as parameter to the Command raised by the Co

    - by saddaypally
    Hi, I'm trying to pass the item in the XamDataGrid on which I right-click to open a ContextMenu, which raises a Command in my ViewModel. Somehow the method that the Command calls is never reached in debug mode. This is the snippet from the view: <ig:XamDataGrid DataSource="{Binding DrdResults}" Height="700" Width="600"> <ig:XamDataGrid.ContextMenu> <ContextMenu DataContext="{Binding RelativeSource={RelativeSource Mode=Self}, Path=PlacementTarget.DataContext}" AllowDrop="True" Name="cmAudit"> <MenuItem Header="View History" Command="{Binding ViewTradeHistory}" CommandParameter="{Binding Path=SelectedItems}"> </MenuItem> </ContextMenu> </ig:XamDataGrid.ContextMenu> <ig:XamDataGrid.FieldSettings> <ig:FieldSettings AllowFixing="NearOrFar" AllowEdit="False" Width="auto" Height="auto" /> </ig:XamDataGrid.FieldSettings> </ig:XamDataGrid> My code in the corresponding ViewModel for this View is as follows: public WPF.ICommand ViewTradeHistory { get { if (_viewTradeHistory == null) { _viewTradeHistory = new DelegateCommand( (object SelectedItems) => { this.OpenTradeHistory(SelectedItems); }); } return _viewTradeHistory; } } And lastly, the actual method that gets called by the Command is as below: private void OpenTradeHistory(object records) { DataPresenterBase.SelectedItemHolder auditRecords = (DataPresenterBase.SelectedItemHolder)records; // Do something with the auditRecords now. } I'm not sure what I am doing incorrectly here. Any help will be very much appreciated. Thanks, Shravan

    Read the article

  • bulk insert and update with ADO.NET Entity Framework

    - by Keith Barrows
    I am writing a small application that does a lot of feed processing. I want to use LINQ EF for this as speed is not an issue, it is a single user app and, in the end, will only be used once a month. My questions revolves around the best way to do bulk inserts using LINQ EF. After parsing the incoming data stream I end up with a List of values. Since the end user may end up trying to import some duplicate data I would like to "clean" the data during insert rather than reading all the records, doing a for loop, rejecting records, then finally importing the remainder. This is what I am currently doing: DateTime minDate = dataTransferObject.Min(c => c.DoorOpen); DateTime maxDate = dataTransferObject.Max(c => c.DoorOpen); using (LabUseEntities myEntities = new LabUseEntities()) { var recCheck = myEntities.ImportDoorAccess.Where(a => a.DoorOpen >= minDate && a.DoorOpen <= maxDate).ToList(); if (recCheck.Count > 0) { foreach (ImportDoorAccess ida in recCheck) { DoorAudit da = dataTransferObject.Where(a => a.DoorOpen == ida.DoorOpen && a.CardNumber == ida.CardNumber).First(); if (da != null) da.DoInsert = false; } } ImportDoorAccess newIDA; foreach (DoorAudit newDoorAudit in dataTransferObject) { if (newDoorAudit.DoInsert) { newIDA = new ImportDoorAccess { CardNumber = newDoorAudit.CardNumber, Door = newDoorAudit.Door, DoorOpen = newDoorAudit.DoorOpen, Imported = newDoorAudit.Imported, RawData = newDoorAudit.RawData, UserName = newDoorAudit.UserName }; myEntities.AddToImportDoorAccess(newIDA); } } myEntities.SaveChanges(); } I am also getting this error: System.Data.UpdateException was unhandled Message="Unable to update the EntitySet 'ImportDoorAccess' because it has a DefiningQuery and no element exists in the element to support the current operation." Source="System.Data.SqlServerCe.Entity" What am I doing wrong? Any pointers are welcome.

    Read the article

  • C# StreamReader.ReadLine() - Need to pick up line terminators

    - by Tony Trozzo
    I wrote a C# program to read an Excel .xls/.xlsx file and output to CSV and Unicode text. I wrote a separate program to remove blank records. This is accomplished by reading each line with StreamReader.ReadLine(), then going character by character through the string and not writing the line to the output if it consists entirely of commas (for the CSV) or entirely of tabs (for the Unicode text).

    The problem occurs when the Excel file contains embedded newlines (\x0A) inside the cells. I changed my XLS-to-CSV converter to find these newlines (since it goes cell by cell) and write them as \x0A, while normal lines are written with StreamWriter.WriteLine(). The trouble shows up in the separate program that removes blank records: StreamReader.ReadLine(), by definition, returns only the text of the line, not its terminator. Since the embedded newlines show up as separate lines, I can't tell which line break ends a full record and which is an embedded newline when I write the lines back out to the final file. I'm not even sure I can detect the \x0A, because everything on the input registers as '\n'. I could go character by character, but that destroys my logic for removing blank lines. Any ideas would be greatly appreciated.
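
    One way around ReadLine() discarding the terminator is to read the stream character by character and treat only the CR/LF pair written by WriteLine() as a record boundary, while a bare \x0A is kept as part of the record. A rough sketch under that assumption (names are illustrative, not from the original programs):

        using System.IO;
        using System.Text;

        static class RecordReader
        {
            // Reads one record, treating only "\r\n" as the record terminator.
            // A bare '\n' (an embedded newline from an Excel cell) stays inside the record text.
            // Returns null at end of stream.
            public static string ReadRecord(TextReader reader)
            {
                var sb = new StringBuilder();
                int ch;
                while ((ch = reader.Read()) != -1)
                {
                    if (ch == '\r' && reader.Peek() == '\n')
                    {
                        reader.Read();          // consume the '\n' of the CR/LF pair
                        return sb.ToString();   // end of a full record
                    }
                    sb.Append((char)ch);        // keeps any bare '\n'
                }
                return sb.Length > 0 ? sb.ToString() : null;
            }
        }

    The blank-record check can then run over each returned record, and embedded \x0A characters survive intact.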

    Read the article

  • Common Table Expressions slow when using a table variable

    - by Phil Haselden
    I have been experimenting with the following (simplified) CTE. When using a table variable (@rootTableVar) the query runs for minutes before I cancel it. Any of the other commented-out methods return in less than a second. If I replace the whole WHERE clause with an INNER JOIN it is fast as well. Any ideas why using a table variable would run so slowly? FWIW: the database contains 2.5 million records and the inner query returns 2 records.

        CREATE TABLE #rootTempTable (RootID int PRIMARY KEY)
        INSERT INTO #rootTempTable VALUES (1360);

        DECLARE @rootTableVar TABLE (RootID int PRIMARY KEY);
        INSERT INTO @rootTableVar VALUES (1360);

        WITH My_CTE AS
        (
            SELECT ROW_NUMBER() OVER(ORDER BY d.DocumentID) rownum, d.DocumentID, d.Title
            FROM [Document] d
            WHERE d.LocationID IN
            (
                SELECT LocationID
                FROM Location
                JOIN @rootTableVar rtv ON Location.RootID = rtv.RootID              -- VERY SLOW!
                --JOIN #rootTempTable tt ON Location.RootID = tt.RootID             -- Fast
                --JOIN (SELECT 1360 as RootID) AS rt ON Location.RootID = rt.RootID -- Fast
                --WHERE RootID = 1360                                               -- Fast
            )
        )
        SELECT *
        FROM My_CTE
        WHERE (rownum > 0) AND (rownum <= 100)
        ORDER BY rownum

    Read the article

  • 550 5.7.1 our Bulk Email Senders Guidelines

    - by darkandcold
    Hello, I moved to a new server with an integrated Merak mail server. Since I had forgotten Merak (I used it before), I neglected to configure its security options. So, before I realized the issue, my SMTP server was used by spammers for two days, hundreds of times. I have since configured the security settings as they should be (at least I hope so), but then I realized I can't send e-mail to Gmail or Hotmail. Gmail says "550 5.7.1 see our Bulk Email Senders Guidelines".

    Yes, I read the guidelines. First, I created SPF records via the MSN Sender ID wizard, and that completed without problems. Then, as Gmail suggests, I created DKIM DNS records (Merak mail has an option to create a DKIM key, which I created and saved to DNS as a TXT record). It has been two days since applying the fixes above, but I still can't send mail to Gmail or Hotmail. (P.S. Some of the domains in Merak are forwarded to my Gmail account; I mean, when a mail arrives at [email protected] it is forwarded to Gmail, and the surprising thing is that Merak can forward those messages without any problem, but it still can't send mail from Outlook etc.)
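
    For reference, a published SPF policy is a single DNS TXT record on the sending domain; a minimal example in zone-file form (the IP address below is a placeholder, not the asker's) would be:

        example.com.  IN  TXT  "v=spf1 a mx ip4:203.0.113.25 ~all"

    Reverse DNS (a PTR record matching the HELO name of the sending server) is also commonly checked by Gmail and Hotmail, so it is worth verifying alongside SPF and DKIM.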

    Read the article

  • StackOverflow in VB.NET SQLite query

    - by Majgel
    I have a StackOverflowException in one of my DB functions that I don't know how to deal with. I have a SQLite database with one table, "tblEmployees", that holds a record for each employee (several thousand records), and usually this function runs without any problem. But sometimes, after the function has been called a thousand times, it breaks with a StackOverflowException at the line "ReturnTable.Load(reader)" with the message:

        An unhandled exception of type 'System.StackOverflowException' occurred in System.Data.SQLite.dll

    If I restart the application it has no problem continuing with the exact same record it last crashed on. I can also make exactly the same DB call from SQLite Admin at the time of the crash without any problems. Here is the code:

        Public Function GetNextEmployeeInQueue() As String
            Dim NextEmployeeInQueue As String = Nothing
            Dim query As [String] = "SELECT FirstName FROM tblEmployees WHERE Checked=0 LIMIT 1;"
            Try
                Dim ReturnTable As New DataTable()
                Dim mycommand As New SQLiteCommand(cnn)
                mycommand.CommandText = query
                Dim reader As SQLiteDataReader = mycommand.ExecuteReader()
                ReturnTable.Load(reader)
                reader.Close()
                If ReturnTable.Rows.Count > 0 Then
                    NextEmployeeInQueue = ReturnTable.Rows(0)("FirstName").ToString()
                Else
                    MsgBox("No more employees found in queue")
                End If
            Catch fail As Exception
                MessageBox.Show("Error: " & fail.Message.ToString())
            End Try
            If NextEmployeeInQueue IsNot Nothing Then
                Return NextEmployeeInQueue
            Else
                Return "No more records in queue"
            End If
        End Function

    When it crashes, the reader shows "Property evaluation failed." for all values. I assume there is some problem with allocated memory that isn't released correctly, but I can't figure out which object is involved. The DB connection is opened when the DB class object is created and closed in the main form's Dispose. Should I maybe dispose of the mycommand object every time in a Finally block? But wouldn't that result in a closed DB connection?
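
    On the disposal question: disposing a SQLiteCommand (or closing its data reader) does not close the underlying connection unless the reader was created with CommandBehavior.CloseConnection, so wrapping them in Using blocks per call is safe. A sketch of the same query with deterministic disposal (shown in C# for brevity; it also skips the intermediate DataTable):

        using System.Data.SQLite;

        static class EmployeeQueue
        {
            // The shared, already-open connection is passed in; it stays open afterwards.
            public static string GetNextEmployeeInQueue(SQLiteConnection cnn)
            {
                const string query = "SELECT FirstName FROM tblEmployees WHERE Checked=0 LIMIT 1;";

                using (var command = new SQLiteCommand(query, cnn))
                using (SQLiteDataReader reader = command.ExecuteReader())
                {
                    if (reader.Read())
                        return reader["FirstName"].ToString();
                }

                return "No more records in queue";
            }
        }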

    Read the article

  • Problem using FtpWebRequest to append to file on a mainframe

    - by MusiGenesis
    I am using FtpWebRequest to append data to a mainframe file. Each record appended is 50 characters long, and I am adding them one record at a time. In our development environment we do not have a mainframe, so my code was written and tested against a Windows-based FTP site instead of a mainframe.

    Initially, I wrote each record using a StreamWriter (wrapping the stream from the FtpWebRequest) and WriteLine (which automatically adds a CR/LF to the end). When we ran this for the first time in the test environment (writing to an actual MVS mainframe), our mainframe contact said the CR/LFs could not be read by his program (a green-screen mainframe program of some sort - he has sent me screen captures, which is all I know of it). I changed the code to use Write instead of WriteLine, and now it executes successfully (i.e. no exceptions are thrown) when writing multiple records, but no matter how many records we append, he can only "see" the first one - according to his mainframe program, there is only one 50-character record in the file. I'm guessing that to fix this I need to write some other record-delimiting character to the stream (instead of CR/LF) that the mainframe will recognize. Does anybody know what that is, or how else I can fix this problem?
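
    For what it's worth, the byte-level behaviour is easier to reason about if the records are written straight to the FtpWebRequest stream rather than through a StreamWriter. A sketch along those lines - the terminator is deliberately a parameter, since the correct record delimiter (or the absence of one) for the target MVS dataset is exactly the open question:

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        static class MainframeAppender
        {
            // Appends fixed-length 50-character records, controlling exactly which bytes are sent.
            // 'terminator' is whatever the receiving system expects between records
            // ("\r\n", "\n", or an empty string for a fixed-record-length dataset).
            public static void AppendRecords(Uri ftpUri, string user, string password,
                                             string[] records, string terminator)
            {
                var request = (FtpWebRequest)WebRequest.Create(ftpUri);
                request.Method = WebRequestMethods.Ftp.AppendFile;
                request.Credentials = new NetworkCredential(user, password);
                request.UseBinary = false; // ASCII transfer; set to true to send the bytes untouched

                using (Stream stream = request.GetRequestStream())
                {
                    foreach (string record in records)
                    {
                        // Pad or truncate to the fixed 50-character record length.
                        string fixedRecord = record.Length > 50 ? record.Substring(0, 50) : record.PadRight(50);
                        byte[] bytes = Encoding.ASCII.GetBytes(fixedRecord + terminator);
                        stream.Write(bytes, 0, bytes.Length);
                    }
                }

                using (var response = (FtpWebResponse)request.GetResponse())
                {
                    // response.StatusDescription can be logged here if needed.
                }
            }
        }

    Whether the dataset expects CR/LF, LF, or fixed-length records with no delimiter at all is something the mainframe side (the record format and LRECL of the target dataset) has to confirm.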

    Read the article

  • BizTalk Business Rules Engine - Repeating Elements Question

    - by Andrew Cripps
    Hello, I'm trying to create what I think should be a relatively simple business rule to operate over repeating elements in an XML schema. Consider the following XML snippet (simplified, with namespaces removed, for readability):

        <Root>
          <AllAccounts>
            <Account id="1" currentPayment="10.00" arrearsAmount="25.00">
              <AllCustomers>
                <Customer id="20" primary="true" canSelfServe="false" />
                <Customer id="21" primary="false" canSelfServe="false" />
              </AllCustomers>
            </Account>
            <Account id="2" currentPayment="10.00" arrearsAmount="15.00">
              <AllCustomers>
                <Customer id="30" primary="true" canSelfServe="false" />
                <Customer id="31" primary="false" canSelfServe="false" />
              </AllCustomers>
            </Account>
          </AllAccounts>
        </Root>

    What I want to do is to have two rules:

        1. Set /Root/AllAccounts/Account[x]/AllCustomers/Customer[primary='true']/canSelfServe = true  IF arrearsAmount < currentPayment
        2. Set /Root/AllAccounts/Account[x]/AllCustomers/Customer[primary='true']/canSelfServe = false IF arrearsAmount = currentPayment

    where [x] ranges over the /Root/AllAccounts/Account records present in the XML. I've tried two simple rules for this, and each rule seems to fire x * x times, where x is the number of Account records in the XML. I only want each rule to fire once for each Account record. Any help greatly appreciated! Thanks, Andrew

    Read the article

  • mongodb: insert if not exists

    - by LeMiz
    Hello, every day I receive a batch of documents (an update). What I want to do is insert each of them if it does not already exist. I also want to keep track of the first time I inserted them and the last time I saw them in an update. I don't want duplicate documents, and I don't want to remove a document which has previously been saved but is not in my update. 95% (estimated) of the records are unmodified from day to day. I am using the Python driver (pymongo), for that matter. What I currently do is (pseudo-code):

        for each document in update:
            existing_document = collection.find_one(document)
            if not existing_document:
                document['insertion_date'] = now
            else:
                document = existing_document
            document['last_update_date'] = now
            my_collection.save(document)

    My problem is that it is very slow (40 minutes for fewer than 100,000 records, and I have millions of them in the update). I am pretty sure there is something built in for doing this, but the documentation for update() is mmmhhh... a bit terse... ( http://www.mongodb.org/display/DOCS/Updating ). Can someone give me advice on doing this faster?
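
    The built-in answer is an upsert: a single update call with upsert enabled that matches on the document's identifying fields, sets last_update_date on every run, and sets insertion_date only when the document is first created ($setOnInsert, available in MongoDB 2.4 and later). The same idea is sketched below with the C# MongoDB driver, to match the other sketches in these results (in pymongo the call is collection.update(query, change, upsert=True)); the "external_id" key is an assumption, and any remaining payload fields would be copied with further .Set(...) calls:

        using System;
        using System.Collections.Generic;
        using MongoDB.Bson;
        using MongoDB.Driver;

        static class FeedImporter
        {
            // Upsert each incoming document: insertion_date is written only on first insert,
            // last_update_date on every run.
            public static void Import(IMongoCollection<BsonDocument> collection, IEnumerable<BsonDocument> batch)
            {
                foreach (BsonDocument doc in batch)
                {
                    var filter = Builders<BsonDocument>.Filter.Eq("external_id", doc["external_id"]);
                    var change = Builders<BsonDocument>.Update
                        .SetOnInsert("insertion_date", DateTime.UtcNow)
                        .Set("last_update_date", DateTime.UtcNow);

                    collection.UpdateOne(filter, change, new UpdateOptions { IsUpsert = true });
                }
            }
        }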

    Read the article

  • Implementing DDD and drawing the line between an Entity and a Value Object

    - by William
    I am implementing an EMR project and would like to apply a DDD-based approach to the problem. I have identified the Patient as the core object of the system. I understand Patient would be an entity as well as an aggregate root. I have also identified that every Patient must have a Doctor and Medical Records. The medical records would encompass Labs, X-Rays, Encounters and so on. I believe those would be entities as well.

    Let us take an Encounter, for example. My implementation currently has a few fields as string properties: the complaint, assessment and plan. The other items necessary for an Encounter are vitals, which I have implemented as a value object. Given that it will be necessary to retrieve vitals without having to retrieve each Encounter, do vitals become part of the Encounter aggregate or the Patient aggregate? I am assuming I could view the Encounter as an aggregate, because other items are spawned from the Encounter, like prescriptions, lab orders and X-rays. Is the approach I am taking in identifying my entities and aggregates right? In the case of vitals, they are specific to a patient, but outside of that there is no other identity associated with them.
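
    To make the entity/value-object distinction concrete: a value object such as Vitals has no identity of its own and compares by its values, while an entity such as Encounter is identified by an id regardless of how its attributes change. A minimal sketch of that split (property names are illustrative, not taken from the project):

        using System;

        // Value object: immutable, no identity; a C# record gives value-based equality for free.
        public sealed record Vitals(decimal TemperatureCelsius, int PulseBpm, int SystolicBp, int DiastolicBp);

        // Entity: identified by Id; its attribute values can change over its lifetime.
        public class Encounter
        {
            public Guid Id { get; }
            public string Complaint { get; set; }
            public string Assessment { get; set; }
            public string Plan { get; set; }
            public Vitals Vitals { get; set; }

            public Encounter(Guid id) => Id = id;
        }

    If vitals need to be queried independently of an Encounter, that is usually an argument for exposing them through a repository or read model rather than for promoting Vitals to an entity; they only need an identity of their own if the domain has to track and refer to an individual vitals reading over time.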

    Read the article
