Search Results

Search found 5527 results on 222 pages for 'unique constraint'.


  • Use MySQL trigger to update another table when duplicate key found

    - by Jason
    Been scratching my head on this one, hoping one of you kind people can direct me towards solving this problem. I have a MySQL table of customers. It contains a lot of data, but for the purpose of this question we only need to worry about 4 columns: 'ID', 'Firstname', 'Lastname', 'Postcode'.

    The problem is, the table contains a lot of duplicated customers. A new table is being created where each customer is unique; for us, a unique customer is defined by 'Firstname', 'Lastname' and 'Postcode'. However (this is the important bit), we need to ensure each new "unique" customer record can still be matched to that customer's multiple entries in the original table. I believe the best way to do this is a third table with 'NewUniqueID' and 'OldCustomerID' columns, so we can search this table for 'NewUniqueID' = '123' and it would return multiple 'OldCustomerID' values where appropriate.

    I am hoping to make this work using a trigger and the ON DUPLICATE KEY syntax. So what would happen is as follows: a query is run taking the old customer table and inserting it into the new unique table (a standard INSERT ... SELECT query). On duplicate key it continues adding records, but adds one entry into the third table noting the 'NewUniqueID' that duplicated along with the 'OldCustomerID' of the record we were trying to insert.

    Hope this makes sense; my apologies if it isn't clear. I welcome and appreciate any thoughts on this one! Many thanks, Jason
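
    One way to get the mapping without a trigger at all is a two-step load: collapse the duplicates with INSERT ... ON DUPLICATE KEY UPDATE, then build the link table with a join on the natural key. A minimal sketch, assuming hypothetical table names (customers is the original table; the other two are new):

        -- New table of de-duplicated customers, with the natural key enforced
        CREATE TABLE unique_customers (
            NewUniqueID INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            Firstname   VARCHAR(50) NOT NULL,
            Lastname    VARCHAR(50) NOT NULL,
            Postcode    VARCHAR(10) NOT NULL,
            UNIQUE KEY uq_customer (Firstname, Lastname, Postcode)
        );

        -- Link table: one unique customer maps to many old IDs
        CREATE TABLE customer_map (
            NewUniqueID   INT NOT NULL,
            OldCustomerID INT NOT NULL,
            PRIMARY KEY (NewUniqueID, OldCustomerID)
        );

        -- Step 1: load distinct customers; duplicates collapse onto the unique key
        INSERT INTO unique_customers (Firstname, Lastname, Postcode)
        SELECT Firstname, Lastname, Postcode FROM customers
        ON DUPLICATE KEY UPDATE Firstname = VALUES(Firstname);  -- deliberate no-op

        -- Step 2: map every old row (duplicates included) to its unique record
        INSERT INTO customer_map (NewUniqueID, OldCustomerID)
        SELECT u.NewUniqueID, c.ID
        FROM customers c
        JOIN unique_customers u
          ON u.Firstname = c.Firstname
         AND u.Lastname  = c.Lastname
         AND u.Postcode  = c.Postcode;

    Note this records a row for every original ID, not only the duplicated ones, which is usually what you want for matching back; filter to duplicated keys in step 2 if not.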


  • JPA - Real primary key generated ID for references

    - by Val
    I have ~10 classes, each of which has a composite key consisting of 2-4 values. One of the classes is the main one (let's call it "Center") and is related to the others as one-to-one or one-to-many. Thinking about the correct way to describe this in JPA, I believe I need to describe all the primary keys using the @Embedded / @PrimaryKey annotations.

    Question #1: My concern is: does this mean that at the database level each table referring to "Center" will have a number of additional columns equal to the number of columns in Center's PK? If yes, is it possible to avoid this by using some artificial unique key for references? Could you please give an idea of how the real PK and the artificial one would need to be described in this case?

    Note: the reason I would like to keep the real PK, rather than just use a unique ID as the PK, is that my application loads data from external data sources, and sometimes they return records I already have in the local database. If a unique ID were used as the PK, I wouldn't be able to update those records, since the unique ID is not available for freshly downloaded ones. This is a normal scenario for the application; it just needs to update or insert records depending on whether the real composite primary key matches.

    Question #2: All 10 classes have a common field, "date", which I described in an abstract class that each of them extends. The "date" itself is never a key on its own, but it is always part of the composite key for each class, and the composite key is different for each class. To be able to use this field as part of the PK, must I describe it in each class, or is there a way to use it as is? I experimented with the @Embedded and @PrimaryKey annotations and always got an error that EclipseLink can't find the field described in the abstract class.

    Thank you in advance!

    PS. I'm using the latest version of EclipseLink and an H2 database.
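
    A minimal sketch of the surrogate-key approach from Question #1, using hypothetical names: the artificial @Id is the PK that foreign keys point at (so child tables carry one FK column instead of 2-4), while a @UniqueConstraint preserves the real composite key so downloaded records can still be matched for update-or-insert. The shared "date" field lives in a @MappedSuperclass, which is the standard JPA way to inherit a persistent field from an abstract class; the "can't find field" error is typically what happens when the abstract class is a plain class rather than a @MappedSuperclass:

        import java.util.Date;
        import javax.persistence.*;

        @MappedSuperclass
        public abstract class DatedEntity {
            @Temporal(TemporalType.DATE)
            @Column(name = "RECORD_DATE", nullable = false)
            protected Date date;            // shared field, part of each natural key
        }

        @Entity
        @Table(name = "CENTER",
               uniqueConstraints = @UniqueConstraint(      // the real composite key
                   columnNames = {"SOURCE_CODE", "RECORD_DATE"}))
        public class Center extends DatedEntity {
            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            private Long id;                // artificial key, used only for references

            @Column(name = "SOURCE_CODE", nullable = false)
            private String sourceCode;      // hypothetical other part of the natural key
        }

        @Entity
        public class Detail extends DatedEntity {
            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            private Long id;

            @ManyToOne(optional = false)
            @JoinColumn(name = "CENTER_ID") // single FK column, whatever Center's natural key width
            private Center center;
        }

    Update-or-insert then becomes: query by the natural key; if a row is found, copy the new values onto the managed instance, otherwise persist a new one.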


  • Proper way to communicate between divs in jquery?

    - by folder
    This is probably a simple question, and I'm just being dense. I've looked through a few jQuery books and nothing has jumped out at me; I'm probably missing something. I'm looking for the 'proper', best-practices way to communicate between divs/DOM items on a page. For example, I have a page with 5 panels that display when a link is chosen; they hide/show/run some code that changes other pieces on the page. Something like this snippet:

        <ul>
            <li><div id="unique_name_1_anchor">Unique div 1</div></li>
            <li><div id="unique_name_2_anchor">Unique div 2</div></li>
            <li><div id="unique_name_3_anchor">Unique div 3</div></li>
            <li><div id="unique_name_4_anchor">Unique div 4</div></li>
        </ul>

    ...and somewhere else on the page:

        <div id="unique_name_1_panel">Some panel 1 stuff here</div>
        <div id="unique_name_2_panel">Some panel 2 stuff here</div>
        <div id="unique_name_3_panel">Some panel 3 here</div>
        <div id="unique_name_4_panel">Some panel 4 here</div>

    The concept being: when a user clicks on a unique_name_X_anchor div, some action is performed on the corresponding panel (i.e. show/hide etc.). What I have been doing now is parsing the id, i.e. $(this).attr("id").replace("_anchor", "_panel"), to get the id of the other DOM element. This just seems clunky, and there must be a better/more proper way of doing this. Suggestions? Thanks
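
    One common pattern (a sketch, assuming hypothetical class names "anchor" and "panel" added to the markup) is to stop encoding the relationship in the id string and declare it with a data- attribute instead, so each anchor carries an explicit reference to its panel:

        <div id="unique_name_1_anchor" class="anchor" data-panel="unique_name_1_panel">Unique div 1</div>
        ...
        <div id="unique_name_1_panel" class="panel">Some panel 1 stuff here</div>

        <script>
        // When an anchor is clicked, read its panel's id from the
        // data-panel attribute, hide every panel, then show the target.
        $('.anchor').click(function () {
            var panelId = $(this).attr('data-panel');
            $('.panel').hide();
            $('#' + panelId).show();
        });
        </script>

    The markup now documents the relationship directly, and renaming either element no longer breaks string-parsing logic.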


  • How to tell the Session to throw the error query? [NHibernate]

    - by xandy
    I made a test class against the repository methods shown below:

        public void AddFile<TFileType>(TFileType FileToAdd) where TFileType : File
        {
            try
            {
                _session.Save(FileToAdd);
                _session.Flush();
            }
            catch (Exception e)
            {
                if (e.InnerException.Message.Contains("Violation of UNIQUE KEY"))
                    throw new ArgumentException("Unique Name must be unique");
                else
                    throw;
            }
        }

        public void RemoveFile(File FileToRemove)
        {
            _session.Delete(FileToRemove);
            _session.Flush();
        }

    And the test class:

        try
        {
            Data.File crashFile = new Data.File();
            crashFile.UniqueName = "NonUniqueFileNameTest";
            crashFile.Extension = ".abc";
            repo.AddFile(crashFile);
            Assert.Fail();
        }
        catch (Exception e)
        {
            Assert.IsInstanceOfType(e, typeof(ArgumentException));
        }

        // Clean up the file
        Data.File removeFile = repo.GetFiles()
            .Where(f => f.UniqueName == "NonUniqueFileNameTest")
            .FirstOrDefault();
        repo.RemoveFile(removeFile);

    The test fails. When I stepped in to trace the problem, I found that the _session.Flush() right after _session.Delete() throws the exception, and if I look at the SQL it runs, it is actually submitting an INSERT INTO statement, which is exactly the SQL that causes the UNIQUE CONSTRAINT error. I tried to wrap both in a transaction but the same problem happens. Anyone know the reason?
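
    The behaviour is consistent with a documented NHibernate rule: once a session throws an exception, its internal state is no longer reliable; here, the failed INSERT stays queued in the session's action list, so the next Flush() (the one inside RemoveFile) replays it. NHibernate's guidance is to roll back and discard the session rather than keep using it. A sketch of that pattern, reusing the types from the post:

        public void AddFile<TFileType>(TFileType fileToAdd) where TFileType : File
        {
            using (var tx = _session.BeginTransaction())
            {
                try
                {
                    _session.Save(fileToAdd);
                    tx.Commit();   // the flush happens as part of the commit
                }
                catch (Exception e)
                {
                    tx.Rollback();
                    // After any exception the session's state is inconsistent;
                    // whoever owns it should Close/Dispose it and open a fresh
                    // session before doing further work (such as the cleanup delete).
                    if (e.InnerException != null &&
                        e.InnerException.Message.Contains("Violation of UNIQUE KEY"))
                        throw new ArgumentException("Unique Name must be unique");
                    throw;
                }
            }
        }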


  • The Windows Browser Ballot Screen Offers Web Browser Choice to European Users

    - by Matthew Guay
    Since March, our friends across the pond in Europe get to decide which browser they want to install with their Windows OS. Today we thought we would take a look at the ballot choices: some are well known, and others you may not have heard of. Windows users in European countries should start seeing the so-called "Browser Ballot Screen" after installing the Windows Update KB976002 (link below). The browser ballot offers a dozen different browsers, including some you've likely never heard of. They each have some unique features, they are all free, and here we take a quick look at each of them.

    Internet Explorer 8
    Internet Explorer is the world's most used web browser, as it's bundled with Windows. It also includes several unique features, including Accelerators that make it easy to search or find a map of a location, and InPrivate Filtering to directly control what sites can get personal information. Additionally, it offers great integration with Windows Touch and the new taskbar in Windows 7. IE 8 runs on Windows XP and newer, and is bundled with Windows 7.

    Mozilla Firefox 3.6
    Firefox is the most popular browser other than Internet Explorer. It is the modern descendant of Netscape, and is loved by web developers for its adherence to web standards, openness, and expandability. It offers thousands of add-ons and themes to let you customize it to fit your preferences. The most recent version has added Personas, which are quick, lightweight themes to let you personalize the look of your browser. It's open source, and runs on all modern versions of Windows, Mac OS X, and Linux. Of course, thanks to Asian Angel, our resident browser expert, you can check out several articles regarding this popular IE alternative.

    Google Chrome 4
    Google Chrome has gained an impressive amount of market share during its short time in the market. It offers a minimalistic interface and fast speeds with intensive web applications. The address bar is also a search bar, so you can enter a search query or web address and quickly get the information you need. With version 4 you can add a growing number of extensions, personalize it with a variety of stylish themes, and automatically translate foreign websites into your own language.

    Opera 10.50
    Although Opera has been around for over a decade, relatively few users have used it. With the new 10.50 release, Opera has many unique features packed into a sleek UI. It integrates well with Aero and the Windows 7 taskbar, and lets you preview the contents of your websites in the tab bar. It also includes Opera Unite, a small personal web server to make file sharing easy; Opera Turbo to speed up your internet when the connection is slow; and Opera Link to keep all your copies of Opera in sync. It's a popular browser on many mobile devices, and version 10.50 has a lot of enhancements.

    Apple Safari 4
    Safari is the default browser in Mac OS X, and starting with version 3 it has been available for Windows as well. It's based on WebKit, the popular new rendering engine that provides great speed and standards compatibility. Safari 4 lets you browse your browsing history in a unique Cover Flow interface, and shows your Top Sites in a fancy 3D interface. It's also great for viewing mobile websites for the iPhone and other mobile devices through its Developer Tools.

    Flock 2.5
    Based on the popular Firefox core, Flock brings a multitude of social features to your browsing experience. You can view the latest YouTube videos and Flickr pictures, update your favorite social network, and keep up with your webmail, thanks to its integration with a wide variety of services. You can even post to your blog through the integrated blog editor. If your time online is mostly spent in social services, this may be a browser you want to check out.

    Maxthon 2.5
    Maxthon is a unique browser that builds on Internet Explorer to bring more features with IE's rendering. Formerly known as MyIE2, Maxthon was popular for bringing tabbed browsing with IE rendering during the days of IE 6. Today Maxthon supports a wide range of plugins and skins, so you can customize it however you want. It includes mouse gestures, a web accelerator to speed up pokey internet connections, a content blocker to remove unwanted content from sites, an online account to back up your favorites, and a nice download manager.

    Avant Browser
    Another nice browser based on Internet Explorer, Avant brings a wide variety of features in a nice brushed-metal interface. It includes integrated AutoFill for forms, mouse gestures, customizable skins, and privacy-protection features. It also includes a Flash blocker that will only load Flash in webpages when you select it. You can also integrate Avant with an online account to store your bookmarks, feeds, settings, and passwords online.

    Sleipnir
    Sleipnir is a customizable browser meant for advanced users that is quite popular in Japan. It's built on the Trident engine, and unlike Internet Explorer, virtually every aspect of it is customizable.

    FlashPeak SlimBrowser
    SlimBrowser from FlashPeak incorporates a lot of features like a popup killer, auto login, site filtering, and more. It's based on Internet Explorer but offers a lot more customizable options out of the box.

    K-Meleon
    This basic browser is light on system resources and based on the Gecko engine. It's been in development for years on SourceForge, and if you like to tweak virtually any aspect of your browser, this might be a good choice for you.

    GreenBrowser
    GreenBrowser is based on Internet Explorer and is available in several languages. It has a large number of features out of the box and is light on system resources.

    Conclusion
    The European Union asked for more choices in the web browser users could choose from when installing Windows, and with the Browser Ballot Screen, they certainly get a variety to choose from. If you've tried out some of the lesser-known browsers, or think some important ones have been left out, leave a comment and tell us about it.
    Learn more about the Browser Ballot Screen and download alternatives to IE: Windows Update KB976002


  • 14+ WordPress Portfolio Themes

    - by Edward
    There are various portfolio themes for WordPress out there; with this collection we are trying to help you choose the best one. These themes can be used to create any type of personal, photography, art, or corporate portfolio.

    Display 3 in 1
    Display 3 in 1 is a business & portfolio WordPress theme. It features a fantastic 3D image slideshow that can be controlled from your backend with a custom tool. The theme has a huge WordPress custom backend (8 additional admin pages) that makes customization easy for those who don't know much about coding or WordPress. Price: $40. View Demo | Download

    DeepFocus
    Tempting features such as automatic separation of blog and portfolio content by template, publishing of the most important information on the homepage, styles to choose from, and many more. It also provides page templates for blog, portfolio, blog archive, tags, etc., and lets you manage everything from one place. Price: $39 (package includes more than 55 themes). View Demo | Download

    SimplePress
    Simple, yet awesome. One of the best portfolio themes. Price: $39 (package includes more than 55 themes). View Demo | Download

    Graphix
    Graphix is one of the best WordPress portfolio themes. It is most suited to aspiring designers, developers, artists, and photographers who'd like a framework theme with a great-looking portfolio and a feature-rich blog. It has a theme options page, 5 color styles, SEO options, featured content blocks, a drop-down multi-level menu, social profile link custom widgets, custom posts, custom page templates, etc. Price: $69 single & $149 developer package. View Demo | Download

    Bizznizz
    It boasts many features such as a custom homepage, custom post types, custom widgets, portfolio templates, alternative styles, and many more. View Demo | Download

    Showtime
    The ultimate WordPress theme for creating your web portfolio, with 3 different styles to choose from. Price: $40. View Demo | Download

    Montana WP Horizontal Portfolio Theme
    Montana is a horizontal portfolio theme, best suited for creative studios to showcase design, photography, illustration, paintings, and art. Price: $30. View Demo | Download

    OverALL
    OverALL is a premium WordPress blog & portfolio theme; it is low priced and has tons of amazing features. Price: $17. View Demo | Download

    Habitat
    Habitat is a blog and portfolio theme with unique portfolio sorting/filtering via a custom jQuery script (each entry supports multiple images or a video), and multiple featured images for each post to generate individual slideshows per post, or the option to directly embed video content from YouTube, Vimeo, Hulu, etc. Price: $35. View Demo | Download

    Fresh Folio
    Fresh Folio from WooThemes can be used as both a portfolio and a premium WordPress theme. The theme is a remix of the Fresh News theme and the Proud Folio theme, combining the best elements of the respective blog- and portfolio-style themes. View Demo | Download

    Fresh Folio features:
    - Can be used to create an impressive portfolio.
    - 7 diverse theme styles to choose from (default, blue, red, grunge light, grunge floral, antique, blue creamer, nightlife).
    - The template automatically (visually) separates your blog & portfolio content, making this an amazing theme for aspiring designers, developers, artists, photographers, etc.
    - Unique page template types for the portfolio, blog, blog archives, tags & search results.
    - Integrated theme options (for WordPress) to tweak the layout, colour scheme, etc. for the theme.
    - Optional automatic image resize, used to dynamically create the thumbnails and featured images.
    - Includes widget-enabled sidebars.

    eGallery
    eGallery is a theme made to transform your WordPress blog into a fully functional online portfolio. The theme is perfectly designed to emphasize the artwork you choose to showcase. The design has been greatly enhanced using JavaScript, and is easy to implement. Price: $39 (package includes more than 55 themes). View Demo | Download

    ProudFolio
    ProudFolio is a premium portfolio WordPress theme from WooThemes, for designers, developers, artists, and photographers who would like a showcase theme that works as a portfolio and also serves as a blog. ProudFolio puts a strong emphasis on the portfolio pieces, allowing for decent-sized thumbnails, huge fullscreen views via Lightbox, and full details on the single page. The theme file also contains a choice of three different background images and color schemes. Price: $70 single, $150 developer license. View Demo | Download

    ProudFolio features:
    - The template automatically (visually) separates your blog & portfolio content.
    - A unique homepage layout, which publishes only the most important information.
    - Unique page templates for the portfolio, blog, blog archives, tags & search results.
    - Integrated theme options (for WordPress) to tweak the layout, colour scheme, etc.
    - Built-in video panel, which you can use to publish any web-based Flash videos.
    - Automatic image resize, used to dynamically create the thumbnails and featured images.
    - Custom page templates for archives, sitemap & image gallery.
    - Built-in Gravatar support for authors & comments.
    - Integrated banner management script to display randomized banner ads of your choice site-wide.
    - Pretty drop-down navigation everywhere.
    - Widget-enabled sidebars.

    Portfolio WordPress Theme
    A FREE WordPress theme designed for web portfolios and (for now) just for web portfolios. It comes with an administrative panel from which you can edit the head quote text, edit all theme colors, font families, and font sizes, and fill in a curriculum vitae and display it on a special page. The theme demo and download can be found here.

    Viz | Biz
    Viz | Biz is a premium WordPress photo gallery and portfolio theme designed specifically for photographers, graphic designers, and web designers who want to display their creative work online, market their services, and also have a typical text blog, using the power and flexibility of WordPress. It is priced at $79.95.

    Viz | Biz features:
    - Premium-quality portfolio template.
    - Custom logo uploader to replace the standard graphic with your own unique look from the WP Dashboard.
    - Integrated blog component (front images are custom fields and thumbnails, but you can also have a typical blog).
    - Four tabbed feature areas (About Me, Services, Recent Posts, and Tags).
    - Two homepage feature photos (you choose which photos to feature using a WP category).
    - Manage your online portfolio through the WordPress CMS.
    - Crop two sizes of your work: one for the front-page thumbnails and another full-size version, and upload to WP.
    - Search engine optimized.


  • Design for complex ATG applications

    - by Glen Borkowski
    Overview

    Needless to say, some ATG applications are more complex than others. Some ATG applications support a single site, a single language, catalog, and currency, have a single development staff and business team, and a relatively simple business model. The really complex applications have to support multiple sites, languages, catalogs, and currencies, a couple of different development teams, multiple business teams, and a highly complex business model (and processes to go along with it). While it's still important to implement a proper design for simple applications, it's absolutely critical for the complex ones. Why? It's all about time and money. If you are unable to manage your complex applications efficiently, the cost of managing them will increase dramatically, as will the time to get things done (time to market). On the positive side, your competition is most likely in the same situation, so you just need to be more efficient than they are.

    This article discusses a number of key areas to think about when designing complex applications on ATG. Some of this can get fairly technical, so it may help to get some background first. You can get enough of the required background information from this post. After reading that, come back here and follow along.

    Application Design

    Of all the various types of ATG applications out there, the most complex tend to be the ones in the telecommunications industry, especially the ones which operate in multiple countries. To get started, let's assume that we are talking about an application like that: one that has these properties:

    - Operates in multiple countries: must support multiple sites, catalogs, languages, and currencies
    - The organization is fairly loosely coupled: single brand, but different businesses across different countries
    - There is some common functionality across all sites in all countries
    - There is some common functionality across different sites within the same country
    - Sites within a single country may have some unique functionality relative to other sites in the same country
    - Complex product catalog (mostly in terms of bundles, eligibility, and compatibility)

    At this point, I'll assume you have read through the required reading and have a decent understanding of how ATG modules work.

    Code / configuration: assemble into modules

    When it comes to defining your modules for a complex application, there are a number of goals:

    - Divide functionality between the modules in a way that maps to your business
    - Group common functionality further down in the stack of modules
    - Provide a good balance between shared resources and autonomy for countries / sites

    Now I'll describe a high-level approach to how you could accomplish those goals. Let's start from the bottom and work our way up. At the very bottom, you have the modules that ship with ATG, the 'out of the box' stuff. You want to make sure that you are leveraging all the modules that make sense, in order to get as much value from ATG as possible and write less yourself.
    On top of the ATG modules, you should create what we'll refer to as the Corporate Foundation Module, described as follows:

    - Sits directly on top of the ATG modules
    - Used by all applications across all countries and sites: this is the foundation for everyone
    - Contains everything that is common across all countries and all sites
    - Once established and settled, will change less frequently than the 'higher' modules
    - Encapsulates as many enterprise-wide integrations as possible
    - Provides a means of code sharing, therefore less development and testing, and faster time to market
    - Contains a 'reference' web application (described below)

    The next layer up could be multiple modules for each country (you could replace this with region if that makes more sense). We'll define those modules as follows:

    - Sits on top of the corporate foundation module
    - Contains what is unique to all sites in a given country
    - Responsible for managing any resource bundles for this country (to handle multiple languages)
    - Overrides / replaces corporate integration points with any country-specific ones

    Finally, we will define what should be a fairly 'thin' (in terms of functionality) set of modules for each site, as follows:

    - Sits on top of the module for the country it resides in
    - Contains what is unique to a given site within a given country
    - Will mostly contain configuration, but could also define some unique functionality as well
    - Contains one or more web applications

    To summarize the layering: ATG's out-of-the-box modules sit at the bottom, the corporate foundation module sits on top of them, the country modules sit on the foundation, and the thin site modules sit on top of their country module. (A module manifest sketch illustrating this appears at the end of this article.)

    Web applications

    As described in the previous section, there are many opportunities for sharing (minimizing costs) in the code and configuration aspects of ATG modules. Web applications are also contained within ATG modules; however, sharing web applications can be more difficult, because this is what the end customer actually sees, and since each site may have some degree of unique look & feel, sharing becomes more challenging. One approach that can help is to define a 'reference' web application at the corporate foundation layer to act as a solid starting point for each site. Here's a description of the 'reference' web application:

    - Contains minimal / sample reference styling, as styling will mostly be addressed in the site-level web app
    - Focuses on functionality: ensures that core functionality is exposed via this web application
    - Each individual site can use it as a starting point
    - There may be multiple types of web apps (i.e. B2C, B2B, etc.)

    There are some techniques to share web application assets (i.e. multiple web applications defined in the web.xml) which are worth investigating, but they are out of scope here.

    Reference infrastructure

    In this complex environment, it is assumed that there is not a single infrastructure for all countries and all sites. It's more likely that different countries (or regions) have their own infrastructure. In this case, it is advantageous to define a reference infrastructure which contains all the hardware and software that make up the core environment. Specifications and diagrams should be created to outline what this reference infrastructure looks like, as well as its baseline cost and the incremental cost to scale up with volume. Having some consistency in infrastructure will save time and money as new countries and sites come online. Here are some properties of the reference infrastructure:

    - Standardized approach to hardware setup: type and number of servers
    - Defines the application server, operating system, database, etc., including vendor and specific versions
    - Consistent naming conventions, providing a consistent base of terminology and understanding across environments
    - Defines which ATG services run on which servers: production, staging, BCC / preview
    - Each site can change as required to meet scale requirements

    Governance / organization

    It should be no surprise that the complex application we're talking about is backed by an equally complex organization. One of the more challenging aspects of efficiently managing a series of complex applications is ensuring the proper level of governance and organization. Here are some ideas and goals to work towards:

    - Establish a committee to make enterprise-wide decisions that affect all sites: representation should be evenly distributed, there should be a clear communication procedure, and the focus should be on high-level business goals
    - Evaluate feature / function gaps and how they relate to the ATG release schedule and roadmap; determine when to upgrade, and ensure the value will be realized
    - Determine how to manage the various levels of modules: who is responsible for maintaining the corporate / country / site layers, and what the procedure is for controlling what goes into the corporate foundation module
    - Standardize on source code control, database, hardware, OS versions, J2EE app servers, development procedures, etc., and only use tested / proven versions; this is something that should be centralized so that every country / site does not have to worry about compatibility between versions
    - Create an innovation team to quickly develop new features and perform proofs of concept; all teams can benefit from its findings

    Summary

    At this point, it should be clear why the topics above (design, governance, organization, etc.) are critical to being able to efficiently manage a complex application. To summarize, it's all about competitive advantage. You will need to reduce costs and improve time to market, with the goal of providing a better experience for your end customers. You can reduce cost by reducing development time, reducing the time allocated to testing (you don't have to test the corporate foundation module over and over again; do it once), and optimizing operations. With an efficient design, you can improve your time to market, and your business will be more flexible and agile. Over time, you'll find that you're becoming more focused on offering functionality that is new to the market (creativity), and this will be rewarded: you're now a leader.

    In addition to the above, you'll realize soft benefits as well. Your staff will be operating in a culture based on sharing. You'll want to reward efforts to improve and enhance the foundation, as this will benefit everyone. This culture will inspire innovation, which can only add to your competitive advantage.
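
    As promised above, here is a sketch of how the module layering is expressed in practice. Each ATG module declares the modules beneath it in its META-INF/MANIFEST.MF via the ATG-Required attribute; the module and path names here are invented for the example. A hypothetical site module SiteUK.StoreA might declare:

        ATG-Required: CountryUK
        ATG-Config-Path: config/config.jar
        ATG-Class-Path: lib/classes.jar

    while the country module CountryUK would in turn declare:

        ATG-Required: CorpFoundation
        ATG-Config-Path: config/config.jar

    Starting the application with the site module then pulls in the country layer, the corporate foundation, and the out-of-the-box ATG modules in order, with each layer's configuration overriding the one below it.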


  • Best Practices - Core allocation

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (also called Logical Domains).

    Introduction

    SPARC T-series servers currently have up to 4 CPU sockets, each of which has up to 8 or (on SPARC T3) 16 CPU cores, while each CPU core has 8 threads, for a maximum of 512 dispatchable CPUs. The defining feature of Oracle VM Server for SPARC is that each domain is assigned CPU threads or cores for its exclusive use. This avoids the overhead of software-based time-slicing and the emulation (or binary rewriting) of system state-changing privileged instructions used in traditional hypervisors. To create a domain, administrators specify the number of CPU threads or cores that the domain will own, as well as its memory and I/O resources. When CPU resources are assigned at the individual thread level, the logical domains constraint manager attempts to assign threads from the same cores to a domain and avoid "split core" situations where the same CPU core is used by multiple domains. Sometimes this is unavoidable, especially when domains are allocated and deallocated CPUs in small increments.

    Why split cores can matter

    Split-core allocations can silently reduce performance, because multiple domains with different address spaces and memory contents are sharing the core's Level 1 cache (L1$). This is called false cache sharing, since even identical memory addresses from different domains must point to different locations in RAM. The effect is increased contention for the cache and higher memory latency for each domain using that core. The degree of performance impact can be widely variable. For applications with very small memory working sets, and with I/O-bound or low-CPU-utilization workloads, it may not matter at all: all machines wait for work at the same speed. If the domains have substantial workloads, or are critical to performance, then this can have an important impact. This blog entry was inspired by a customer issue in which one CPU core was split among 3 domains, one of which was the control and service domain. The reported problem was increased I/O latency in guest domains, but the root cause might be higher latency servicing the I/O requests due to the control domain being slowed down.

    What to do about it

    Split-core situations are easily avoided. In most cases the logical domains constraint manager will avoid them without any administrative action, but they can be entirely prevented by doing one of the following:

    - Assign virtual CPUs in multiples of 8, the number of threads per core. For example: ldm set-vcpu 8 mydomain or ldm add-vcpu 24 mydomain. Each domain will then be allocated on a core boundary.
    - Use the whole-core constraint when assigning CPU resources. This allocates CPUs in increments of entire cores instead of virtual CPU threads. The equivalent of the above commands would be ldm set-core 1 mydomain or ldm add-core 3 mydomain. Older syntax does the same thing by adding the -c flag to the add-vcpu, rm-vcpu and set-vcpu commands, but the new syntax is recommended. When whole-core allocation is used, an attempt to add cores to a domain fails if there aren't enough completely empty cores to satisfy the request. See https://blogs.oracle.com/sharakan/entry/oracle_vm_server_for_sparc4 for an excellent article on this topic by Eric Sharakan.
    - Don't obsess: if the workloads have minimal CPU requirements and don't need anywhere near a full CPU core, then don't worry about it. If you have low-utilization workloads being consolidated from older machines onto a current T-series, then there's no need to worry about this or to assign an entire core to domains that will never use that much capacity.

    In any case, make sure the most important domains have their own CPU cores: in particular the control domain and any I/O or service domain, and of course any important guests.

    Summary

    Split-core CPU allocation to domains can potentially have an impact on performance, but the logical domains manager tends to prevent this situation, and it can be completely and simply avoided by allocating virtual CPUs on core boundaries.
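
    As a quick illustration of the whole-core syntax above (the domain name is hypothetical):

        # Allocate exactly 2 whole cores (2 x 8 = 16 threads) to the domain
        ldm set-core 2 mydomain

        # Inspect the domain's resource bindings to confirm core ownership
        ldm list-bindings mydomain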


  • WebLogic Server Performance and Tuning: Part II - Thread Management

    - by Gokhan Gungor
    WebLogic Server, like any other Java application server, provides resources that your applications use to provide services. Unfortunately, none of these resources are unlimited, and they must be managed carefully. One of these resources is threads, which are pooled to provide better throughput and performance, along with fast response times, and to avoid deadlocks. Threads are the execution points through which WebLogic Server delivers its power and executes work. Managing threads is very important because it can affect the overall performance of the entire system.

    In releases prior to WebLogic Server 9.0 we had multiple execute queues and user-defined thread pools. There were different queues for different types of work, each with a fixed number of execute threads. Tuning these thread pools and finding the proper number of threads was time consuming and required many trials. WebLogic Server 9.0 and the following releases use a single thread pool and a single priority-based execute queue. All types of work are executed in this single thread pool. Its size (thread count) is automatically decreased or increased (self-tuned). The new "self-tuning" system simplifies finding the proper number of threads and utilizing them.

    A work manager allows your applications to run concurrently in multiple threads. A work manager is a mechanism that lets you manage and utilize threads, and create rules/guidelines to follow when assigning requests to threads. We can set a scheduling guideline or priority for requests with a work manager and then associate this work manager with one or more applications. At run time, WebLogic Server uses these guidelines to assign pending work/requests to execution threads. The position of a request in the execute queue is determined by its priority. A default work manager is provided, and it should be sufficient for most applications. However, there can be cases where you want to change this default configuration: your application(s) may be providing services that need a mixture of fast response times and long-running processes like batch updates. Be aware that a wrong work manager configuration can lead to a performance penalty where you were expecting an improvement.

    We can define/configure work managers at:

    - Domain level: config.xml
    - Application level: weblogic-application.xml
    - Component level: weblogic-ejb-jar.xml or weblogic.xml (for a specific web application, use weblogic.xml)

    We can use the following predefined request classes and constraints to manage the work:

    - Fair Share Request Class: specifies the average thread-use time required to process requests. The default is 50.
    - Response Time Request Class: specifies a response time goal in milliseconds.
    - Context Request Class: assigns request classes to requests based on context information.
    - Min Threads Constraint: guarantees the number of threads the server will allocate to requests.
    - Max Threads Constraint: limits the number of concurrent threads executing requests.
    - Capacity Constraint: causes the server to reject requests only when it has reached its capacity.

    Let's create a work manager for a long-running piece of work in our application. Go to the WebLogic console and select Environment | Work Managers from the domain structure tree. Click the New button, select Work Manager, and click Next. Enter the name for the work manager (here, MyWorkManager) and click Next. Then select the managed server instance(s) or clusters from the available targets (the ones your long-running application is deployed to) and finish.

    Click on MyWorkManager, open the Configuration tab, check Ignore Stuck Threads, and save. This prevents WebLogic from treating long-running requests in this work manager (those taking more than the configured stuck-thread time) as stuck, and lets the work finish.
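
    For reference, the same kind of work manager can be defined in a deployment descriptor instead of the console. A sketch of a component-level work manager in a web application's WEB-INF/weblogic.xml (the names and the thread count are invented for the example):

        <weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
            <!-- Define the work manager and cap its concurrency at 10 threads -->
            <work-manager>
                <name>MyWorkManager</name>
                <max-threads-constraint>
                    <name>MaxThreadsCount</name>
                    <count>10</count>
                </max-threads-constraint>
            </work-manager>
            <!-- Dispatch this web app's requests through that work manager -->
            <wl-dispatch-policy>MyWorkManager</wl-dispatch-policy>
        </weblogic-web-app>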


  • SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – T-SQL Example – Part 2 of 2

    - by pinaldave
    Yesterday I wrote a real-world story about a friend who thought they had an intrusion or virus issue, whereas the issue was really in the code. I strongly suggest you read my earlier blog post, Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2, before continuing, as this is the second part of that post. Let me reproduce the simple scenario in T-SQL.

    Building sample data:

        USE [TestDB]
        GO
        -- Creating Table Products
        CREATE TABLE [dbo].[Products](
            [ProductID] [int] NOT NULL,
            [ProductDesc] [varchar](50) NOT NULL,
            CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED ([ProductID] ASC)
        ) ON [PRIMARY]
        GO
        -- Creating Table ProductDetails
        CREATE TABLE [dbo].[ProductDetails](
            [ProductDetailID] [int] NOT NULL,
            [ProductID] [int] NOT NULL,
            [Total] [int] NOT NULL,
            CONSTRAINT [PK_ProductDetails] PRIMARY KEY CLUSTERED ([ProductDetailID] ASC)
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[ProductDetails] WITH CHECK
        ADD CONSTRAINT [FK_ProductDetails_Products] FOREIGN KEY([ProductID])
        REFERENCES [dbo].[Products] ([ProductID])
        ON UPDATE CASCADE
        ON DELETE CASCADE
        GO
        -- Insert Data into Table
        USE TestDB
        GO
        INSERT INTO Products (ProductID, ProductDesc)
        SELECT 1, 'Bike'
        UNION ALL SELECT 2, 'Car'
        UNION ALL SELECT 3, 'Books'
        GO
        INSERT INTO ProductDetails ([ProductDetailID],[ProductID],[Total])
        SELECT 1, 1, 200
        UNION ALL SELECT 2, 1, 100
        UNION ALL SELECT 3, 1, 111
        UNION ALL SELECT 4, 2, 200
        UNION ALL SELECT 5, 3, 100
        UNION ALL SELECT 6, 3, 100
        UNION ALL SELECT 7, 3, 200
        GO

    Select data from the tables:

        SELECT * FROM Products
        SELECT * FROM ProductDetails
        GO

    Delete data from the Products table:

        DELETE FROM Products
        WHERE ProductID = 1
        GO

    Select data from the tables again:

        SELECT * FROM Products
        SELECT * FROM ProductDetails
        GO

    Clean up:

        DROP TABLE ProductDetails
        DROP TABLE Products
        GO

    My friend was confused: no DELETE was fired against the ProductDetails table, and still a delete happened there. The reason is the foreign key created between the Products and ProductDetails tables with the keywords ON DELETE CASCADE. When ON DELETE CASCADE is specified, whenever data is deleted from table A, any rows referencing it in another table through the foreign key are deleted as well.

    Workaround 1: Design Changes – 3 Tables

    Change the design to have more than two tables. Create one product master table with all the products; it should historically store the complete product list, and no products should ever be removed from it. Add another table called Current Product, containing only the products which should be visible in the product catalogue, and a third table called ProductHistory. There should be no use of the CASCADE keyword among them.

    Workaround 2: Design Changes – Column IsVisible

    You can keep the same two tables, 1) Products and 2) ProductDetails. Add a column of the BIT datatype to Products and name it IsVisible. Now change your application code to display the catalogue based on this column. There is then no need to delete anything. (A T-SQL sketch of this appears at the end of the post.)

    Workaround 3: Bad Advice (bad advice begins here)

    The reason I call this bad advice is that these suggestions can damage the system and database integrity further; you should make the necessary design changes instead. Here are the examples:

    1) Do not delete the data. Well, this is not a real solution, but it can buy time to implement the design changes.
    2) Do not use ON DELETE CASCADE. In this case, you will have entries in ProductDetails with no corresponding ProductID, and later on there will be lots of confusion.
    3) Duplicate the data. You can move all the data of the Products table into the ProductDetails table, repeating it on each row, and then remove the CASCADE code. This will let you delete Products rows without any issue. There are so many things wrong with this suggestion that I will not even start here. (Bad advice ends here.)

    Well, did I miss anything? Please help me with your suggestions.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
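
    To make Workaround 2 concrete, here is a minimal T-SQL sketch against the sample tables above (IsVisible is the column name suggested in the post):

        -- Add the visibility flag; existing rows default to visible
        ALTER TABLE Products ADD IsVisible BIT NOT NULL DEFAULT 1;
        GO
        -- "Delete" a product by hiding it; no rows are removed, so the
        -- ON DELETE CASCADE on ProductDetails never fires
        UPDATE Products SET IsVisible = 0 WHERE ProductID = 1;
        GO
        -- Catalogue queries filter on the flag
        SELECT p.ProductID, p.ProductDesc
        FROM Products p
        WHERE p.IsVisible = 1;
        GO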


  • SINGLE SIGN ON SECURITY THREAT! FACEBOOK access_token broadcast in the open/clear

    - by MOKANA
    Subsequent to my posting, a remark was made that this was not really a question, but I thought I did indeed postulate one. So that there is no ambiguity, here is the question, with a lead-in:

    Since there is no data sent from Facebook during the Canvas Load process that is not at some point divulged, including the access_token, session, and other data that could uniquely identify a user, does anyone see any way, other than adding one more layer (i.e., a password sent over the wire via HTTPS along with the access_token), to ensure unique, untampered-with security for the user?

    Using Wireshark, I captured the local broadcast while loading my Canvas Application page. I was hugely surprised to see the access_token broadcast in the open, viewable for anyone to see. This access_token is appended to any HTTPS call to the Facebook Open Graph API. Using Facebook as a single-click logon has now raised huge concerns for me. The token is stored in a session object in memory, and the cookie is cleared upon app termination; after reviewing the FB.init calls I saw a lot of HTTPS calls, so I assumed the access_token was always encrypted. But last night I saw in the status bar a call that was plain HTTP and included the App ID, so I felt I should sniff the Application Canvas load sequence. Today I did sniff the broadcast, and in the attached image you can see that there are HTTP calls with the access_token being sent in the open and clear for anyone to gain access to.

    Am I missing something? Is what I am seeing, and my interpretation of it, really correct? If anyone can sniff and get the access_token, they can theoretically make calls to the Graph API via HTTPS, even though the callback would still need to be the site established in Facebook's application setup. But what is truly a security threat is anyone using the access_token for access from their own site. I do not see the value of a single sign-on via Facebook if the only thing established as secure is the access_token, because from what I can see it clearly is not secure. Access tokens that never expire do not change. Access_tokens are different for every user, so access to another site could be held tight to just a single user, but compromising even a single user's data is unacceptable.

    http://www.creatingstory.com/images/InTheOpen.png

    Went back and did more research on this.

    FINDINGS: I re-ran the canvas application to verify that it was not my own code doing the broadcasting. In this call:

        HTTP GET /connect.php/en_US/js/CacheData HTTP/1.1

    the USER ID is clearly visible in the cookie. So USER IDs are fully visible, but they already are: anyone can go to pretty much anyone's page, hover over the image, and see the USER ID. So no big threat there. APP_IDs are also easily obtainable, but...

    http://www.creatingstory.com/images/InTheOpen2.png

    The above file clearly shows the FULL ACCESS TOKEN, in the open, in a Facebook-initiated call. Am I wrong? TELL ME I AM WRONG, because I want to be wrong about this. I have since reset my app secret, so I am showing the real sniff of the Canvas Page being loaded.

    Additional data, 02/20/2011:

    @ifaour: I appreciate the time you took to compile your response. I am pretty familiar with the OAuth process and have a pretty solid understanding of the signed_request unpacking and the utilization of the access_token. I perform a substantial amount of my processing on the server, and my Facebook server-side flows are all complete and function without any flaw that I know of. The application secret is secure, never passed to the front-end application, and changed regularly. I am being as fanatical about security as I can be, knowing there is so much I don't know that could come back and bite me.

    Two huge access_token issues:

    The issues concern the possible utilization of the access_token from the USER AGENT (browser). During the FB.init() process of the Facebook JavaScript SDK, a cookie is created, as well as an object in memory called a session object. This object, along with the cookie, contains the access_token, session, a secret, the uid, and the status of the connection. The session object is structured such that it supports both the new OAuth flow and the legacy flows. With OAuth, the access_token and status are pretty much all that is used in the session object.

    The first issue is that the access_token is used to make HTTPS calls to the Graph API. If you had the access_token, you could do this from any browser:

        https://graph.facebook.com/220439?access_token=...

    and it will return a ton of information about the user. So anyone with the access token can gain access to a Facebook account. You can also make additional calls to any info the user has granted the application tied to the access_token permission to see. At first I thought that a call into the Graph had to have a callback to the URL established in the app setup, but I tested it as mentioned below, and it returns info right back into the browser. Adding that callback requirement would be a good idea, I think; it would tighten things up a bit.

    The second issue is the utilization of some unique, private, secured data that identifies the user to the third-party database: i.e., in my case, I would use a single sign-on to populate user information into my database using this unique secured data item (the access_token, which contains the APP ID, the USER ID, and a sequence hashed with the secret). None of this is a problem on the server side: you get a signed_request, you unpack it with the secret, make HTTPS calls, and get HTTPS responses back. When a user has information entered via the USER AGENT (browser) that must be stored via a POST, this unique secured data element would be sent via HTTPS so that it is validated prior to database insertion.

    However, if there is NO secured piece of unique data supplied via the single sign-on process, then there is no way to guarantee against unauthorized access. The access_token is the one piece of data that Facebook utilizes to make the HTTPS calls into the Graph API. It is considered unique with regard to BOTH the USER and the APPLICATION, and it is initially secured via the signed_request packaging. If, however, it is subsequently transmitted in the clear, and I can sniff the wire and obtain it, then I can pretend to be the application and gain the information the user has authorized the application to see. I tried the above example from Safari and IE browsers, and it returned all of my information to me in the browser.

    In conclusion, the access_token is part of the signed_request, and that is how the application initially obtains it. After OAuth authentication and authorization (i.e., the USER has logged into Facebook and then runs your app), the access_token is stored as mentioned above, and I have sniffed it and seen it stored in a cookie transmitted over the wire. The result is that there is NO UNIQUE, SECURED, IDENTIFIABLE piece of information that can be used to support interaction with the database. In other words, unless one more piece of secure data (i.e., a password) were sent along with the access_token to my database, I would not be able to discern whether a call is legitimate. Luckily I use secure AJAX via POST, and the call has to come from the same domain, but I am sure there is a way to hijack that.

    I am totally open to any ideas on this topic for how to uniquely identify my USERS other than adding another layer (a password) to this single sign-on process, or to someone simply sharing with me that I read and analyzed my data incorrectly and that the access_token is always secure over the wire. Mahalo nui loa in advance.


  • Glassfish4 throws an exception when I declare a validation.xml file on the classpath

    - by Rafael Ruiz Tabares
    I've tried to declare a custom validator for the @NotNull constraint, and GlassFish 4 throws the exception below when it finds /META-INF/validation.xml. The project works fine if I omit this file.

        Exception while dispatching an event
        java.lang.IllegalStateException: Singleton not set for WebappClassLoader(delegate=true; repositories=WEB-INF/classes/)
            at org.glassfish.weld.ACLSingletonProvider$ACLSingleton.get(ACLSingletonProvider.java:110)
            at org.jboss.weld.Container.instance(Container.java:54)
            at org.jboss.weld.bootstrap.WeldBootstrap.shutdown(WeldBootstrap.java:644)
            at org.glassfish.weld.WeldDeployer.doBootstrapShutdown(WeldDeployer.java:309)
            at org.glassfish.weld.WeldDeployer.event(WeldDeployer.java:220)
            at org.glassfish.kernel.event.EventsImpl.send(EventsImpl.java:131)
            at org.glassfish.internal.data.ApplicationInfo.load(ApplicationInfo.java:328)
            at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:493)
            at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:219)
            at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:491)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:527)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:523)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:356)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl$2.execute(CommandRunnerImpl.java:522)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:546)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1423)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1500(CommandRunnerImpl.java:108)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1762)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1674)
            at org.glassfish.admin.rest.resources.admin.CommandResource.executeCommand(CommandResource.java:396)
            at org.glassfish.admin.rest.resources.admin.CommandResource.execCommandSimpInMultOut(CommandResource.java:234)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
            at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:125)
            at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:152)
            at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:91)
            at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:346)
            at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:341)
            at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:101)
            at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:224)
            at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
            at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
            at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
            at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
            at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
            at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
            at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:198)
            at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:946)
            at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:331)
            at org.glassfish.admin.rest.adapter.JerseyContainerCommandService$3.service(JerseyContainerCommandService.java:165)
            at org.glassfish.admin.rest.adapter.RestAdapter.service(RestAdapter.java:181)
            at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:246)
            at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:191)
            at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:168)
            at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:189)
            at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
            at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:288)
            at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:206)
            at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:136)
            at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:114)
            at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
            at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:838)
            at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:113)
            at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:115)
            at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:55)
            at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:135)
            at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:564)
            at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:544)
            at java.lang.Thread.run(Thread.java:744)
        ]]

        [2014-06-09T19:37:52.476+0200] [glassfish 4.0] [SEVERE] [AS-WEB-CORE-00108] [javax.enterprise.web.core] [tid: _ThreadID=32 _ThreadName=admin-listener(1)] [timeMillis: 1402335472476] [levelValue: 1000] [[
        ContainerBase.addChild: start: org.apache.catalina.LifecycleException: java.lang.IllegalArgumentException: javax.servlet.ServletException: com.sun.enterprise.container.common.spi.util.InjectionException: Error creating managed object for class: class org.jboss.weld.servlet.WeldListener
            at org.apache.catalina.core.StandardContext.start(StandardContext.java:5864)
            at com.sun.enterprise.web.WebModule.start(WebModule.java:691)
            at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:1041)
            at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:1024)
            at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:747)
            at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:2278)
            at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1924)
            at com.sun.enterprise.web.WebApplication.start(WebApplication.java:139)
            at org.glassfish.internal.data.EngineRef.start(EngineRef.java:122)
            at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:291)
            at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:352)
            at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:497)
            at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:219)
            at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:491)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:527)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:523)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:356)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl$2.execute(CommandRunnerImpl.java:522)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:546)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1423)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1500(CommandRunnerImpl.java:108)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1762)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1674)
            at org.glassfish.admin.rest.resources.admin.CommandResource.executeCommand(CommandResource.java:396)
            at org.glassfish.admin.rest.resources.admin.CommandResource.execCommandSimpInMultOut(CommandResource.java:234)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
            at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:125)
            at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:152)
            at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:91)
            at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:346)
            at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:341)
            at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:101)
            at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:224)
            at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
            at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
            at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
            at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
            at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
            at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
            at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:198)
            at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:946)
            at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:331)
            at org.glassfish.admin.rest.adapter.JerseyContainerCommandService$3.service(JerseyContainerCommandService.java:165)
            at org.glassfish.admin.rest.adapter.RestAdapter.service(RestAdapter.java:181)
            at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:246)
            at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:191)
            at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:168)
            at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:189)
            at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
            at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:288)
            at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:206)
            at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:136)
            at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:114)
            at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
            at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:838)
            at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:113)
            at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:115)
            at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:55)
            at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:135)
            at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:564)
            at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:544)
            at java.lang.Thread.run(Thread.java:744)
        Caused by: java.lang.IllegalArgumentException: javax.servlet.ServletException: com.sun.enterprise.container.common.spi.util.InjectionException: Error creating managed object for class: class org.jboss.weld.servlet.WeldListener
            at org.apache.catalina.core.StandardContext.addListener(StandardContext.java:3270)
            at org.apache.catalina.core.StandardContext.addApplicationListener(StandardContext.java:2476)
            at com.sun.enterprise.web.TomcatDeploymentConfig.configureApplicationListener(TomcatDeploymentConfig.java:251)
            at com.sun.enterprise.web.TomcatDeploymentConfig.configureWebModule(TomcatDeploymentConfig.java:110)
            at com.sun.enterprise.web.WebModuleContextConfig.start(WebModuleContextConfig.java:266)
            at org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:486)
            at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:163)
            at org.apache.catalina.core.StandardContext.start(StandardContext.java:5861)
            ... 66 more
        Caused by: javax.servlet.ServletException: com.sun.enterprise.container.common.spi.util.InjectionException: Error creating managed object for class: class org.jboss.weld.servlet.WeldListener
            at org.apache.catalina.core.StandardContext.createListener(StandardContext.java:3391)
            at org.apache.catalina.core.StandardContext.loadListener(StandardContext.java:5414)
            at com.sun.enterprise.web.WebModule.loadListener(WebModule.java:1788)
            at org.apache.catalina.core.StandardContext.addListener(StandardContext.java:3268)
            ...
73 more Caused by: com.sun.enterprise.container.common.spi.util.InjectionException: Error creating managed object for class: class org.jboss.weld.servlet.WeldListener at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.createManagedObject(InjectionManagerImpl.java:329) at com.sun.enterprise.web.WebContainer.createListenerInstance(WebContainer.java:1015) at com.sun.enterprise.web.WebModule.createListenerInstance(WebModule.java:2158) at org.apache.catalina.core.StandardContext.createListener(StandardContext.java:3389) ... 76 more Caused by: java.lang.NullPointerException at org.jboss.weld.bootstrap.WeldBootstrap.getManager(WeldBootstrap.java:435) at org.glassfish.weld.services.JCDIServiceImpl.createManagedObject(JCDIServiceImpl.java:320) at org.glassfish.weld.services.JCDIServiceImpl.createManagedObject(JCDIServiceImpl.java:263) at com.sun.enterprise.container.common.impl.managedbean.ManagedBeanManagerImpl.createManagedBean(ManagedBeanManagerImpl.java:485) at com.sun.enterprise.container.common.impl.managedbean.ManagedBeanManagerImpl.createManagedBean(ManagedBeanManagerImpl.java:439) at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.createManagedObject(InjectionManagerImpl.java:313) ... 79 more This is constraint xml file <constraint-definition annotation="org.hibernate.validator.constraints.NotNull"> <validated-by include-existing-validators="true"> <value>es.project.validator.customConstraint.NotEmptyValidator</value> </validated-by> </constraint-definition> And validation.xml <validation-config xmlns="http://jboss.org/xml/ns/javax/validation/configuration" xsi:schemaLocation="http://jboss.org/xml/ns/javax/validation/configuration validation-configuration-1.0.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <constraint-mapping>META-INF/validation/mapping.xml</constraint-mapping> Project's structure WEB-INF +----\classes +-------\META-INF ------- validation.xml ----------\validation +----------\mapping.xml Validator code import javax.validation.ConstraintValidator; import javax.validation.ConstraintValidatorContext; import javax.validation.constraints.NotNull; import org.hibernate.validator.constraintvalidation.HibernateConstraintValidatorContext; public class NotEmptyValidator implements ConstraintValidator<NotNull,Object> { @Override public void initialize(NotNull constraintAnnotation) { } @Override public boolean isValid(Object value, ConstraintValidatorContext context) { if(value.toString().isEmpty()){ ........... ........... ........... } return true; } }

    Read the article

  • How to debug MySQL/Doctrine2 Queries?

    - by jiewmeng
    I am using MySQL with Zend Framework & Doctrine 2. I think even if you don't use Doctrine 2, you will be familiar with errors like

        SQLSTATE[42000]: Syntax error or access violation: 1064 You have an error in your SQL syntax;
        check the manual that corresponds to your MySQL server version for the right syntax to use near 'ASC' at line 1

    The problem is that I don't see the full query. Without an ORM framework, I could probably echo the SQL easily, but with a framework, how can I find out what SQL it's trying to execute? I narrowed the error down to

        $progress = $task->getProgress();

    where $progress is declared as

        // Application\Models\Task
        /**
         * @OneToMany(targetEntity="TaskProgress", mappedBy="task")
         * @OrderBy({"seq" = "ASC"})
         */
        protected $progress;

    In MySQL, the tasks table looks like

        CREATE TABLE `tasks` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `owner_id` int(11) DEFAULT NULL,
          `assigned_id` int(11) DEFAULT NULL,
          `list_id` int(11) DEFAULT NULL,
          `name` varchar(60) NOT NULL,
          `seq` int(11) DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY `tasks_owner_id_idx` (`owner_id`),
          KEY `tasks_assigned_id_idx` (`assigned_id`),
          KEY `tasks_list_id_idx` (`list_id`),
          CONSTRAINT `tasks_ibfk_1` FOREIGN KEY (`owner_id`) REFERENCES `users` (`id`),
          CONSTRAINT `tasks_ibfk_2` FOREIGN KEY (`assigned_id`) REFERENCES `users` (`id`),
          CONSTRAINT `tasks_ibfk_3` FOREIGN KEY (`list_id`) REFERENCES `lists` (`id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=latin1$$
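
    One server-side option, independent of the ORM, is MySQL's general query log, which records every statement the server actually receives (a sketch, assuming MySQL 5.1+ where the log can be toggled at runtime; the file path is only an example):

        -- Log every statement the server receives (remember to switch it off again):
        SET GLOBAL general_log_file = '/tmp/mysql-all-queries.log';
        SET GLOBAL general_log = 'ON';
        -- ... reproduce the failing page/request, then:
        SET GLOBAL general_log = 'OFF';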

    Read the article

  • The type of field isn't supported by declared persistence strategy "OneToMany"

    - by Robert
    We are new to JPA and trying to set up a very simple one-to-many relationship, where a pojo called Message can have a list of integer group ids defined by a join table called GROUP_ASSOC. Here is the DDL:

        CREATE TABLE "APP"."MESSAGE" (
          "MESSAGE_ID" INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1)
        );
        ALTER TABLE "APP"."MESSAGE" ADD CONSTRAINT "MESSAGE_PK" PRIMARY KEY ("MESSAGE_ID");

        CREATE TABLE "APP"."GROUP_ASSOC" (
          "GROUP_ID" INTEGER NOT NULL,
          "MESSAGE_ID" INTEGER NOT NULL
        );
        ALTER TABLE "APP"."GROUP_ASSOC" ADD CONSTRAINT "GROUP_ASSOC_PK" PRIMARY KEY ("MESSAGE_ID", "GROUP_ID");
        ALTER TABLE "APP"."GROUP_ASSOC" ADD CONSTRAINT "GROUP_ASSOC_FK" FOREIGN KEY ("MESSAGE_ID")
          REFERENCES "APP"."MESSAGE" ("MESSAGE_ID");

    Here is the pojo:

        @Entity
        @Table(name = "MESSAGE")
        public class Message {
            @Id
            @Column(name = "MESSAGE_ID")
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            private int messageId;

            @OneToMany(fetch = FetchType.LAZY, cascade = CascadeType.PERSIST)
            private List groupIds;

            public int getMessageId() { return messageId; }
            public void setMessageId(int messageId) { this.messageId = messageId; }
            public List getGroupIds() { return groupIds; }
            public void setGroupIds(List groupIds) { this.groupIds = groupIds; }
        }

    When we try to execute the following test code, we get:

        <openjpa-1.2.3-SNAPSHOT-r422266:907835 fatal user error> org.apache.openjpa.util.MetaDataException:
        The type of field "pojo.Message.groupIds" isn't supported by declared persistence strategy "OneToMany".
        Please choose a different strategy.

    The test code:

        Message msg = new Message();
        List groups = new ArrayList();
        groups.add(101);
        groups.add(102);
        EntityManager em = Persistence.createEntityManagerFactory("TestDBWeb").createEntityManager();
        em.getTransaction().begin();
        em.persist(msg);
        em.getTransaction().commit();

    Help!

    Read the article

  • Symfony fk issue on insertion

    - by Daniel Hertz
    Hi, I posted a similar problem but it could not be resolved. I am creating a relational database of users and groups, but for some reason I cannot insert test data with fixtures properly. Here is a sample of the schema:

        User:
          actAs: { Timestampable: ~ }
          columns:
            name:     { type: string(255), notnull: true }
            email:    { type: string(255), notnull: true, unique: true }
            nickname: { type: string(255), unique: true }
            password: { type: string(300), notnull: true }
            image:    { type: string(255) }

        Group:
          actAs: { Timestampable: ~ }
          columns:
            name:          { type: string(500), notnull: true }
            image:         { type: string(255) }
            type:          { type: string(255), notnull: true }
            created_by_id: { type: integer }
          relations:
            User: { onDelete: SET NULL, class: User, local: created_by_id, foreign: id, foreignAlias: groups_created }

        FanOf:
          actAs: { Timestampable: ~ }
          columns:
            user_id:  { type: integer, primary: true }
            group_id: { type: integer, primary: true }
          relations:
            User:  { onDelete: CASCADE, local: user_id, foreign: id, foreignAlias: fanhood }
            Group: { onDelete: CASCADE, local: group_id, foreign: id, foreignAlias: fanhood }

    And this is the data I try to input:

        User:
          user1:
            name: Danny
            email: [email protected]
            nickname: danny
            password: f05050400c5e586fa6629ef497be

        Group:
          group1:
            name: Mets
            type: sports

        FanOf:
          fans1:
            user_id: user1
            group_id: group1

    I keep getting this error:

        SQLSTATE[23000]: Integrity constraint violation: 1452 Cannot add or update a child row:
        a foreign key constraint fails (`krowdd`.`fan_of`, CONSTRAINT `fan_of_user_id_user_id`
        FOREIGN KEY (`user_id`) REFERENCES `user` (`id`) ON DELETE CASCADE)

    The users and groups are clearly being created before the "fanhood" is, so why am I getting this error? Thanks!

    Read the article

  • How I should use BIT in MS SQL 2005

    - by adopilot
    Regarding SQL performance: I have a scalar-valued function for checking a specific condition in the database. It returns a BIT value, true or false. I do not know how I should fill the @bit parameter. If I write

        set @bit = convert(bit, 1)

    or

        set @bit = 1

    or

        set @bit = 'true'

    the function will work either way, but I do not know which method is recommended for daily use. Another question: I have a table in my database with around 4 million records, and the daily insert into that table is about 4K records. Now I want to add a CONSTRAINT on that table with the scalar-valued function I already mentioned, something like this:

        ALTER TABLE fin_stavke
        ADD CONSTRAINT fin_stavke_knjizenje CHECK ( dbo.fn_ado_chk_fin(id) = convert(bit, 1) )

    where the field "id" is the primary key of table fin_stavke, and dbo.fn_ado_chk_fin looks like:

        create FUNCTION fn_ado_chk_fin
        (
            @stavka_id int
        )
        RETURNS bit
        AS
        BEGIN
            declare @bit bit
            if exists (select * from fin_stavke
                       where id = @stavka_id and doc_id is null and protocol_id is null)
            begin
                set @bit = 0
            end
            else
            begin
                set @bit = 1
            end
            return @bit;
        END
        GO

    Will this type and method of check constraint badly affect performance on my table and on SQL Server in general? If there is a better way to add this control to the table, please let me know.
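
    For what it's worth, a minimal sketch (assuming SQL Server 2005, where integer literals and the strings 'true'/'false' are implicitly convertible to BIT) showing that all three assignments produce the same value:

        DECLARE @bit bit;
        SET @bit = CONVERT(bit, 1);  -- explicit conversion
        SET @bit = 1;                -- implicit conversion from int
        SET @bit = 'true';           -- 'true'/'false' strings also convert to bit
        SELECT @bit;                 -- 1 in every case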

    Read the article

  • Entity Framework 4 and SYSUTCDATETIME()

    - by GIbboK
    Hi, I use EF4 and C#. I have a table in my database (MS SQL 2008) with a default value of SYSUTCDATETIME() for one column. The idea is to automatically add the date and time as soon as a new record is created. I created my conceptual model with EF4, and I have an ASP.NET page with a DetailsView control in INSERT mode. My problem: when I create a new record, EF does not insert the actual date and time; it inserts the value 0001-01-01 00:00:00.00 instead. I suppose EF is not able to use the SYSUTCDATETIME() defined in my database. Any idea how to solve it? Thanks. Here is my SQL script:

        CREATE TABLE dbo.CmsAdvertisers
        (
            AdvertiserId int NOT NULL IDENTITY
                CONSTRAINT PK_CmsAdvertisers_AdvertiserId PRIMARY KEY,
            DateCreated datetime2(2) NOT NULL
                CONSTRAINT DF_CmsAdvertisers_DateCreated DEFAULT sysutcdatetime(),
            ReferenceAdvertiser varchar(64) NOT NULL,
            NoteInternal nvarchar(256) NOT NULL
                CONSTRAINT DF_CmsAdvertisers_NoteInternal DEFAULT ''
        );

    My temporary solution (please help me improve on this):

        e.Values["DateCreated"] = DateTime.UtcNow;

    More info here:
    http://msdn.microsoft.com/en-us/library/bb387157.aspx (How to use the default Entity Framework and default date values)
    http://msdn.microsoft.com/en-us/library/dd296755.aspx
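
    The behaviour is easy to reproduce on the database side (a sketch against the table above): the default only fires when the INSERT omits the column, and EF sends an explicit value (the uninitialized DateTime, 0001-01-01) that overrides it. The commonly cited EF-side fix is to mark the column as store-generated (StoreGeneratedPattern) so EF stops sending a value for it.

        -- Default fires: DateCreated is omitted, so sysutcdatetime() is used.
        INSERT INTO dbo.CmsAdvertisers (ReferenceAdvertiser) VALUES ('demo-1');

        -- Default does NOT fire: an explicit value (what EF sends) wins.
        INSERT INTO dbo.CmsAdvertisers (DateCreated, ReferenceAdvertiser)
        VALUES ('0001-01-01', 'demo-2');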

    Read the article

  • Select highest rated, oldest track

    - by Blair McMillan
    I have several tables:

        CREATE TABLE [dbo].[Tracks](
            [Id] [uniqueidentifier] NOT NULL,
            [Artist_Id] [uniqueidentifier] NOT NULL,
            [Album_Id] [uniqueidentifier] NOT NULL,
            [Title] [nvarchar](255) NOT NULL,
            [Length] [int] NOT NULL,
            CONSTRAINT [PK_Tracks_1] PRIMARY KEY CLUSTERED ( [Id] ASC )
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

        CREATE TABLE [dbo].[TrackHistory](
            [Id] [int] IDENTITY(1,1) NOT NULL,
            [Track_Id] [uniqueidentifier] NOT NULL,
            [Datetime] [datetime] NOT NULL,
            CONSTRAINT [PK_TrackHistory] PRIMARY KEY CLUSTERED ( [Id] ASC )
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

        INSERT INTO [cooltunes].[dbo].[TrackHistory] ([Track_Id], [Datetime])
        VALUES ('335294B0-735E-4E2C-8389-8326B17CE813', GETDATE())

        CREATE TABLE [dbo].[Ratings](
            [Id] [int] IDENTITY(1,1) NOT NULL,
            [Track_Id] [uniqueidentifier] NOT NULL,
            [User_Id] [uniqueidentifier] NOT NULL,
            [Rating] [tinyint] NOT NULL,
            CONSTRAINT [PK_Ratings] PRIMARY KEY CLUSTERED ( [Id] ASC )
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

        INSERT INTO [cooltunes].[dbo].[Ratings] ([Track_Id], [User_Id], [Rating])
        VALUES ('335294B0-735E-4E2C-8389-8326B17CE813', 'C7D62450-8BE6-40F6-80F1-A539DA301772', 1)

    There is also a Users table: User_Id (Guid) plus other fields. Links between the tables are pretty obvious. TrackHistory gets a row added whenever a track is played, i.e. a track will appear in there many times. The Ratings value is either 1 or -1. What I'm trying to do is select the track with the highest rating that was last played more than 2 hours ago; if there is a tie on total rating (e.g. one track receives six +1 ratings and one -1 rating, giving it a total rating of 5, and another track also has a total rating of 5), the track that was last played the longest ago should be returned. (If all tracks have been played within the last 2 hours, no rows should be returned.) I'm getting somewhere doing each part individually using the link above, SUM(Value) and GROUP BY Track_Id, but I'm having trouble putting it all together. Hopefully someone with a bit more (MS)SQL knowledge will be able to help me. Many thanks!
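
    One possible shape for the query (a sketch, untested against the schema above; note the pre-aggregated subqueries, since joining Ratings and TrackHistory directly would multiply rows and inflate the SUM; as an aside, a tinyint column cannot actually store -1, so Rating may need a signed type):

        SELECT TOP 1
            t.Id, t.Title, r.TotalRating, h.LastPlayed
        FROM dbo.Tracks t
        JOIN (SELECT Track_Id, SUM(CAST(Rating AS int)) AS TotalRating
              FROM dbo.Ratings GROUP BY Track_Id) r ON r.Track_Id = t.Id
        JOIN (SELECT Track_Id, MAX([Datetime]) AS LastPlayed
              FROM dbo.TrackHistory GROUP BY Track_Id) h ON h.Track_Id = t.Id
        WHERE h.LastPlayed <= DATEADD(hour, -2, GETDATE())
        ORDER BY r.TotalRating DESC, h.LastPlayed ASC;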

    Read the article

  • Updating nullability of columns in SQL 2008

    - by Shaul
    I have a very wide table, containing lots and lots of bit fields. These bit fields were originally set up as nullable. Now we've just made a decision that it doesn't make sense to have them nullable; the value is either Yes or No, default No. In other words, the schema should change from:

        create table MyTable(
            ID bigint not null,
            Name varchar(100) not null,
            BitField1 bit null,
            BitField2 bit null,
            ...
            BitFieldN bit null
        )

    to

        create table MyTable(
            ID bigint not null,
            Name varchar(100) not null,
            BitField1 bit not null,
            BitField2 bit not null,
            ...
            BitFieldN bit not null
        )

        alter table MyTable add constraint DF_BitField1 default 0 for BitField1
        alter table MyTable add constraint DF_BitField2 default 0 for BitField2
        alter table MyTable add constraint DF_BitField3 default 0 for BitField3

    So I've just gone into SQL Management Studio, updating all these fields to non-nullable, default value 0. And guess what: when I try to save it, Management Studio internally recreates the table and then tries to reinsert all the data into the new table... including the null values! Which of course generates an error, because it's explicitly trying to insert a null value into a non-nullable column. Aaargh! Obviously I could run N update statements of the form:

        update MyTable set BitField1 = 0 where BitField1 is null
        update MyTable set BitField2 = 0 where BitField2 is null

    but as I said before, there are N fields out there, and what's more, this change has to propagate out to several identical databases. Very painful to implement manually. Is there any way to make the table modification just ignore the null values and allow the default rule to kick in when you attempt to insert a null value?
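
    One way to avoid doing it by hand (a sketch, assuming the table lives in dbo and every nullable bit column should be converted): generate the UPDATE/ALTER statements from the catalog views, then run the generated output against each database.

        -- Emits one batch of statements per nullable bit column of dbo.MyTable.
        SELECT 'UPDATE dbo.MyTable SET ' + c.name + ' = 0 WHERE ' + c.name + ' IS NULL;' + CHAR(13) +
               'ALTER TABLE dbo.MyTable ALTER COLUMN ' + c.name + ' bit NOT NULL;' + CHAR(13) +
               'ALTER TABLE dbo.MyTable ADD CONSTRAINT DF_' + c.name + ' DEFAULT 0 FOR ' + c.name + ';'
        FROM sys.columns c
        WHERE c.object_id = OBJECT_ID('dbo.MyTable')
          AND c.system_type_id = TYPE_ID('bit')
          AND c.is_nullable = 1;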

    Read the article

  • SQL Server: Clustering by timestamp; pros/cons

    - by Ian Boyd
    I have a table in SQL Server where I want inserts to be added to the end of the table (as opposed to a clustering key that would cause them to be inserted in the middle). This means I want the table clustered by some column that will constantly increase. This could be achieved by clustering on a datetime column:

        CREATE TABLE Things (
            ...
            CreatedDate datetime DEFAULT getdate(),
            [timestamp] timestamp,
            CONSTRAINT [IX_Things] UNIQUE CLUSTERED (CreatedDate)
        )

    But I can't guarantee that two Things won't have the same time, so my requirements can't really be achieved by a datetime column. I could add a dummy identity int column and cluster on that:

        CREATE TABLE Things (
            ...
            RowID int IDENTITY(1,1),
            [timestamp] timestamp,
            CONSTRAINT [IX_Things] UNIQUE CLUSTERED (RowID)
        )

    But you'll notice that my table already contains a timestamp column; a column which is guaranteed to be monotonically increasing. This is exactly the characteristic I want for a candidate cluster key. So I cluster the table on the rowversion (aka timestamp) column:

        CREATE TABLE Things (
            ...
            [timestamp] timestamp,
            CONSTRAINT [IX_Things] UNIQUE CLUSTERED (timestamp)
        )

    Rather than adding a dummy identity int column (RowID) to ensure an order, I use what I already have. What I'm looking for are thoughts on why this is a bad idea, and what other ideas are better. Note: Community wiki, since the answers are subjective.

    Read the article

  • Changing the indexing on an existing table in SQL Server 2000

    - by Raj
    Guys, here is the scenario: SQL Server 2000 (8.0.2055). The table currently has 478 million rows of data. The primary key column is an INT with IDENTITY. There is a unique constraint imposed on two other columns with a non-clustered index. This is a vendor application and we are only responsible for maintaining the DB. Now the vendor has recommended doing the following "to improve performance":

        1. Drop the PK and clustered index
        2. Drop the non-clustered index on the two columns with the UNIQUE CONSTRAINT
        3. Recreate the PK, with a NON-CLUSTERED index
        4. Create a CLUSTERED index on the two columns with the UNIQUE CONSTRAINT

    I am not convinced that this is the right thing to do. I have a number of concerns. By dropping the PK and indexes, you will be creating a heap with 478 million rows of data. Then creating a CLUSTERED INDEX on two columns would be a really mammoth task. Would creating another table with the same structure and new indexing scheme, copying the data over, dropping the old table and renaming the new one be a better approach (see the sketch below)? I am also not sure how the stored procs will react; will they continue using the cached execution plan, considering that they are not being explicitly recompiled? I am simply not able to understand what kind of "performance improvement" this change will provide. I think that this will actually have the reverse effect. All thoughts welcome. Thanks in advance, Raj
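
    A rough sketch of the copy-and-swap approach (table and column names are placeholders, not the vendor's schema; assumes a maintenance window and enough disk space for two copies of the data):

        -- New table with the target indexing scheme already in place.
        CREATE TABLE dbo.BigTable_New (
            ID   int IDENTITY(1,1) NOT NULL,
            ColA int NOT NULL,
            ColB int NOT NULL,
            CONSTRAINT PK_BigTable_New PRIMARY KEY NONCLUSTERED (ID),
            CONSTRAINT UQ_BigTable_New UNIQUE CLUSTERED (ColA, ColB)
        )

        -- Copy the data, preserving identity values.
        SET IDENTITY_INSERT dbo.BigTable_New ON
        INSERT INTO dbo.BigTable_New (ID, ColA, ColB)
            SELECT ID, ColA, ColB FROM dbo.BigTable
        SET IDENTITY_INSERT dbo.BigTable_New OFF

        -- Swap the tables.
        EXEC sp_rename 'dbo.BigTable', 'BigTable_Old'
        EXEC sp_rename 'dbo.BigTable_New', 'BigTable'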

    Read the article

  • How to reference using Entity Framework and ASP.NET MVC 2

    - by Picflight
    Tables:

        CREATE TABLE [dbo].[Users](
            [UserId] [int] IDENTITY(1,1) NOT NULL,
            [UserName] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
            [Email] [varchar](255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
            [BirthDate] [smalldatetime] NULL,
            [CountryId] [int] NULL,
            CONSTRAINT [PK_Users] PRIMARY KEY CLUSTERED ([UserId] ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

        CREATE TABLE [dbo].[TeamMember](
            [UserId] [int] NOT NULL,
            [TeamMemberUserId] [int] NOT NULL,
            [CreateDate] [smalldatetime] NOT NULL
                CONSTRAINT [DF_TeamMember_CreateDate] DEFAULT (getdate()),
            CONSTRAINT [PK_TeamMember] PRIMARY KEY CLUSTERED ([UserId] ASC, [TeamMemberUserId] ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    dbo.TeamMember has both UserId and TeamMemberUserId as the index key. My goal is to show a list of users on my view. In the list I want to flag, or highlight, the users that are team members of the logged-in user.

    My ViewModel:

        public class UserViewModel
        {
            public int UserId { get; private set; }
            public string UserName { get; private set; }
            public bool HighLight { get; private set; }

            public UserViewModel(Users users, bool highlight)
            {
                this.UserId = users.UserId;
                this.UserName = users.UserName;
                this.HighLight = highlight;
            }
        }

    View:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
            Inherits="System.Web.Mvc.ViewPage<MvcPaging.IPagedList<MyProject.Mvc.Models.UserViewModel>>" %>
        <% foreach (var item in Model) { %>
            <%= item.UserId %>
            <%= item.UserName %>
            <% if (item.HighLight) { %>
                Team Member
            <% } else { %>
                Not Team Member
            <% } %>
        <% } %>

    How do I toggle Team Member / Not Team Member? If I add dbo.TeamMember to the EDM, there are no relationships on this table; how will I wire it to the Users object? I am comparing the logged-in UserId with this list:

        SELECT TeamMemberUserId FROM TeamMember WHERE UserId = @LoggedInUserId
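
    Whatever the EDM mapping ends up looking like, the set-based shape of the query is a LEFT JOIN against TeamMember filtered to the logged-in user (a sketch; @LoggedInUserId is a placeholder for the current user's id):

        SELECT u.UserId,
               u.UserName,
               CASE WHEN tm.TeamMemberUserId IS NOT NULL THEN 1 ELSE 0 END AS HighLight
        FROM dbo.Users u
        LEFT JOIN dbo.TeamMember tm
               ON tm.TeamMemberUserId = u.UserId
              AND tm.UserId = @LoggedInUserId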

    Read the article

  • How to use a parameterized function for the Default Binding of a SQL Server column

    - by Walt Gaber
    I have a table that catalogs selected files from multiple sources. I want to record whether a file is a duplicate of a previously cataloged file at the time the new file is cataloged. I have a column in my table ("primary_duplicate") to record each entry as 'P' (primary) or 'D' (duplicate). I would like to provide a Default Binding for this column that would check for other occurrences of this file (i.e. name, length, timestamp) at the time the new file is being recorded. I have created a function that performs this check (see "GetPrimaryDuplicate" below). But I don't know how to bind this function, which requires three parameters, to the table's "primary_duplicate" column as its Default Binding. I would like to avoid using a trigger. I currently have a stored procedure used to insert new records that performs this check, but I would like to ensure that the flag is set correctly if an insert is performed outside of this stored procedure. How can I call this function with values from the row that is being inserted?

        USE [MyDatabase]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[FileCatalog](
            [id] [uniqueidentifier] NOT NULL,
            [catalog_timestamp] [datetime] NOT NULL,
            [primary_duplicate] [nchar](1) NOT NULL,
            [name] [nvarchar](255) NULL,
            [length] [bigint] NULL,
            [timestamp] [datetime] NULL
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_id]
            DEFAULT (newid()) FOR [id]
        GO
        ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_catalog_timestamp]
            DEFAULT (getdate()) FOR [catalog_timestamp]
        GO
        ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_primary_duplicate]
            DEFAULT (N'GetPrimaryDuplicate(name, length, timestamp)') FOR [primary_duplicate]
        GO

        USE [MyDatabase]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE FUNCTION [dbo].[GetPrimaryDuplicate]
        (
            @name nvarchar(255),
            @length bigint,
            @timestamp datetime
        )
        RETURNS nchar(1)
        AS
        BEGIN
            DECLARE @c int

            SELECT @c = COUNT(*)
            FROM FileCatalog
            WHERE name = @name AND length = @length AND timestamp = @timestamp
              AND primary_duplicate = 'P'

            IF @c > 0
                RETURN 'D' -- Duplicate

            RETURN 'P' -- Primary
        END
        GO
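
    As far as I know, a default expression cannot reference other columns of the row being inserted (SQL Server only allows constants, constant expressions, and functions without column arguments there), which is why the binding above ends up as a string literal rather than a function call. A sketch of the insert-path alternative, keeping the flag computation in the stored procedure (the procedure name CatalogFile is hypothetical); covering out-of-band inserts as well would still require a trigger or a check constraint:

        CREATE PROCEDURE dbo.CatalogFile
            @name nvarchar(255),
            @length bigint,
            @timestamp datetime
        AS
        BEGIN
            -- Compute the flag from the row's own values at insert time.
            INSERT INTO dbo.FileCatalog ([name], [length], [timestamp], primary_duplicate)
            VALUES (@name, @length, @timestamp,
                    dbo.GetPrimaryDuplicate(@name, @length, @timestamp));
        END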

    Read the article

  • "date_part('epoch', now() at time zone 'UTC')" not the same time as "now() at time zone 'UTC'" in po

    - by sirlark
    I'm writing a web-based front end to a database (PHP/PostgreSQL) in which I need to store various dates/times. The times are meant to always be entered on the client side in local time, and displayed in local time too. For storage purposes, I store all dates/times as integers (UNIX timestamps) normalised to UTC. One particular field has a restriction that the timestamp filled in is not allowed to be in the future, so I tried to enforce this with a database constraint:

        CONSTRAINT not_future CHECK (timestamp - 300 <= date_part('epoch', now() at time zone 'UTC'))

    The -300 is to give 5 minutes' leeway in case of slightly desynchronised times between browser and server. The problem is, this constraint always fails when submitting the current time. I've done some testing and found the following in the PostgreSQL client:

        SELECT now()
        -- returns the correct local time

        SELECT date_part('epoch', now())
        -- returns a unix timestamp at UTC (tested by feeding the value into the
        -- date function in PHP, correcting for its compensation to my time zone)

        SELECT date_part('epoch', now() at time zone 'UTC')
        -- returns a unix timestamp at two time zone offsets west, e.g. I am at
        -- GMT+2, I get a GMT-2 timestamp

    I've figured out that dropping the "at time zone 'UTC'" will solve my problem, but my question is: if 'epoch' is meant to return a unix timestamp, which AFAIK is always meant to be in UTC, why would the 'epoch' of a time already in UTC be corrected? Is this a bug, or am I missing something about the defined/normal behaviour here?
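
    A sketch of what appears to be going on, consistent with the behaviour described above (and assuming the interpretation used by PostgreSQL versions of that era): "now() at time zone 'UTC'" yields a timestamp WITHOUT time zone, and date_part('epoch', ...) on a zone-less timestamp interprets it in the session's time zone, so the UTC offset gets applied a second time.

        -- Session time zone assumed to be GMT+2 for illustration.
        SELECT date_part('epoch', now());
        -- timestamptz input: unambiguous, yields the correct UTC epoch.

        SELECT date_part('epoch', now() at time zone 'UTC');
        -- timestamp-without-time-zone input: the UTC wall-clock value is treated
        -- as local (GMT+2) time and shifted to UTC again; the result is off by
        -- twice the zone offset, i.e. an effective GMT-2.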

    Read the article

< Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >