Search Results

Search found 20475 results on 819 pages for 'multiple repositories'.


  • Creating Tables in DokuWiki

    - by Bryan
    I'm trying to create a table in DokuWiki with a cell that spans vertically. However, unlike the examples in the syntax guide, the cell I want to create has more than one row of text. The following is an ASCII version of what I'm trying to achieve:

        +-----------+-----------+
        | Heading 1 | Heading 2 |
        +-----------+-----------+
        |           | Multiple  |
        | Some text | rows of   |
        |           | text      |
        +-----------+-----------+

    I've tried the following syntax:

        ^ Heading 1 ^ Heading 2 ^
        | Some text | Multiple  |
        | :::       | rows of   |
        | :::       | text      |

    but this generates the output:

        +-----------+-----------+
        | Heading 1 | Heading 2 |
        +-----------+-----------+
        |           | Multiple  |
        |           +-----------+
        | Some text | rows of   |
        |           +-----------+
        |           | text      |
        +-----------+-----------+

    I can't find anything in the DokuWiki documentation, so I'm hoping I'm missing something fundamentally simple.

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone with an interesting situation. He has 15,000 customers, and he asks if he should have a database for each customer's data. Without a LOT more data it's impossible to say, of course, but there are some general concepts to keep in mind. Whenever you're segmenting data, it's all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information?) and – very important – what the security requirements are. From the answers to these types of questions, you have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead – it needs a certain amount of memory for locking and so on – but it has a very clean boundary: everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a schema, but keeping 15,000 schemas can be challenging too. My recommendation in complex situations like this is similar to a post on decisions that I did earlier – I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it's a mix – perhaps this person could segment customers into larger regions or districts or products in a database, and within that database have multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.
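
    As a rough illustration of the spreadsheet technique described above, here is a minimal C# sketch of the same weighted scoring (the options, requirements and numbers are invented for the example):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class ConsolidationScoring
        {
            static void Main()
            {
                // Rows are the choices; the columns score each choice against a
                // requirement (e.g. security isolation, admin overhead,
                // cross-customer queries). All values here are made up.
                var scores = new Dictionary<string, int[]>
                {
                    { "Multiple tables, one database", new[] { 2, 5, 5 } },
                    { "One schema per customer",       new[] { 3, 2, 4 } },
                    { "One database per customer",     new[] { 5, 1, 1 } },
                    { "One database per region",       new[] { 4, 4, 3 } }
                };

                // The highest total wins, exactly as on the spreadsheet.
                var winner = scores.OrderByDescending(kv => kv.Value.Sum()).First();
                Console.WriteLine("{0} wins with {1} points", winner.Key, winner.Value.Sum());
            }
        }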

    Read the article

  • Farseer: Cutting body from texture

    - by Robin Betka
    Is it possible to cut a body from a texture in Farseer 3.0? I have a texture converted to a body with multiple fixtures (using BayazitDecomposer, the CreatePolygon method, etc.) and can even do it as a BreakableBody. But when I try to cut it with the cutting tool, the fixture itself gets cut, but its connections get discarded! So when I have 14 fixtures and cut fixture 3, for example, fixture 3 gets cut but fixtures 1, 2 and 4-14 just go away. Is there a way to do this? It would work already if I could convert the texture into a body with only one fixture, but I haven't figured out if that's possible. BayazitDecomposer creates the multiple vertices, but leaving it out creates something weird and I get assert messages all the time. I know I couldn't break it that way, but I don't need that anyway if I can cut it. The breaking is just the workaround I'm using now. Extending the cutting tool to support multiple fixtures is very hard, especially when you consider that in one cut multiple fixtures could be cut and then connected again.

    Read the article

  • Program that groups windows into tabs

    - by Arithmomaniac
    I recall once stumbling on a program that could take multiple application windows and wrap them inside a large window with a tabbed interface. One use of this, for example, would be to wrap multiple instances of Excel into one window, and thus one icon on the taskbar. I couldn't find mention of this program via Google because of the multiple meanings of the word "window". Does anyone remember, or know of, such a program?

    Read the article

  • Scene or Activity Animation

    - by Siddharth
    My game requires an animation when one activity finishes and the next starts, because I built the game with multiple activities rather than multiple scenes. I need to show an animation when an activity is created and when it is destroyed. I have tried creating the basic animations that Android supports, placing the XML files in the anim folder, but the resource loading cost is so high that any animation I provide through Android's methods looks weird. If the Scene class has some animation functionality, please let me know and I will try loading the different animations using scenes. I have not created multiple scenes because I don't know how to manage multiple scenes in AndEngine, even though I have eight months of working experience with it, so help there would also be welcome. Basically, I want an animation where one activity slides out while the other slides in, so the user can see the transition between activities. Thanks in advance.

    Read the article

  • Non-standard installation (installing Linux from Linux)

    - by Evan Plaice
    So, here's my setup. I have one partition with the newest version installed, a second partition with an older version installed (as a backup just in case), a swap partition that both share, and a boot partition so the bootloader doesn't need to be set up after each upgrade. Partitions:

        sda1 ext3 /boot
        sda2 ext4 / (current version)
        sda3 ext4 / (old version)
        sda4 swap /swap
        sda5 ntfs (contains folders symbolically linked to /home on /)

    So far it has been a very good setup. I can create new boot loaders without screwing it up, and adding my personal files to a new install is as simple as creating some symbolic links (the partition is NTFS in case I need to load Windows on the system again). Here's the issue. I'd like to be able to drop the install into /distro on the current version and install a new version on / on the old version, effectively replacing/upgrading it. The goal is to be able to just swap out new versions as they are released while maintaining redundancy in case I don't like the update. So far I have:

    - downloaded the install.iso
    - created a folder in /distro
    - copied the install.iso into /distro
    - extracted vmlinuz and initrd.lz into /distro

    Then I modified /boot/grub/menu.lst with the following entry:

        title Install Linux
        root (hd0,1)
        kernel /distro/vmlinuz
        initrd /distro/initrd.lz

    vmlinuz loads perfectly, but it says it can't find initrd.lz on boot. I have also tried to uncompress the image with:

        unlzma < initrd.lz > initrd.img

    and updating the menu.lst file to match, but that doesn't work either. I'm assuming that vmlinuz (the Linux kernel) loads, fires up the virtual filesystem by creating a ramdisk (initrd), mounts the ISO, and launches the installer. Am I missing something here?

    Update: First, I wanted to say that the accepted answer would have been the best option if I was doing a normal Ubuntu install. Unfortunately, I was installing Linux Mint (which lacks the script needed to make debootstrap work). The problem with the above approach was that I was missing the command that vmlinuz (the Linux kernel) needed to execute to boot into LiveCD mode. By looking in the /boot/grub/grub.cfg file I found what I was missing. Although this method will work, it requires that the installation files reside on their own partition. I took the easy route and used unetbootin to drop the LiveCD on a USB drive and booted from that. Like I said before, debootstrap would have been the ideal solution here. Even though I couldn't use it, I wrote down the steps it would have taken:

    Step One: Format sda3 (the partition with the old copy of Linux that's being overwritten). I used gparted to format it as ext4 from within the current Linux install. How this is done varies based on what tools you prefer to use.

    Step Two: Mount the newly formatted partition (we'll call the mount point ubuntu for simplicity):

        sudo mkdir /mnt/ubuntu
        sudo mount -o loop /dev/sda3 /mnt/ubuntu

    Step Three: Get debootstrap:

        sudo apt-get install debootstrap

    Step Four: Mount the install disk (replace ubuntu.iso with the name of your install disk):

        sudo mkdir /media/cdrom
        sudo mount -o loop ~/ubuntu.iso /media/cdrom

    Step Five: Install the OS using debootstrap (replace feisty with the version you're installing and amd64 with your processor's architecture):

        sudo debootstrap --arch amd64 feisty /mnt/ubuntu file:/media/cdrom

    The settings here vary. While I loaded debootstrap using an install ISO, you can also have debootstrap automatically download and install from a repository link (while most of these repositories contain Debian versions, I'm still not clear as to whether Ubuntu has similar repositories). Here is a list of the Debian package repositories and their mirrors. This is how you'd deploy debootstrap if you were doing it directly from a repository:

        sudo debootstrap --arch amd64 squeeze /mnt/debian http://ftp.us.debian.org/debian

    Here's the link that I primarily used to figure this out.

    Read the article

  • Is DataRow thread safe? How to update a single DataRow in a DataTable using multiple threads? - .NET

    - by NLV
    Hello all. I want to update a single DataRow in a DataTable using multiple threads. Is this actually possible? I've written the following code, implementing simple multi-threading to update a single DataRow, but I get different results each time. Why is that?

        public partial class Form1 : Form
        {
            private static DataTable dtMain;
            private static string threadMsg = string.Empty;

            public Form1()
            {
                InitializeComponent();
            }

            private void Form1_Load(object sender, EventArgs e)
            {
                Thread[] thArr = new Thread[5];
                dtMain = new DataTable();
                dtMain.Columns.Add("SNo");
                DataRow dRow;
                dRow = dtMain.NewRow();
                dRow["SNo"] = 5;
                dtMain.Rows.Add(dRow);
                dtMain.AcceptChanges();
                ThreadStart ts = new ThreadStart(delegate { dtUpdate(); });
                thArr[0] = new Thread(ts);
                thArr[1] = new Thread(ts);
                thArr[2] = new Thread(ts);
                thArr[3] = new Thread(ts);
                thArr[4] = new Thread(ts);
                thArr[0].Start();
                thArr[1].Start();
                thArr[2].Start();
                thArr[3].Start();
                thArr[4].Start();
                while (!WaitTillAllThreadsStopped(thArr))
                {
                    Thread.Sleep(500);
                }
                foreach (Thread thread in thArr)
                {
                    if (thread != null && thread.IsAlive)
                    {
                        thread.Abort();
                    }
                }
                dgvMain.DataSource = dtMain;
            }

            private void dtUpdate()
            {
                for (int i = 0; i < 1000; i++)
                {
                    try
                    {
                        dtMain.Rows[0][0] = Convert.ToInt32(dtMain.Rows[0][0]) + 1;
                        dtMain.AcceptChanges();
                    }
                    catch
                    {
                        continue;
                    }
                }
            }

            private bool WaitTillAllThreadsStopped(Thread[] threads)
            {
                foreach (Thread thread in threads)
                {
                    if (thread != null && thread.ThreadState == ThreadState.Running)
                    {
                        return false;
                    }
                }
                return true;
            }
        }

    Any thoughts on this? Thank you, NLV
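
    For what it's worth, DataRow is documented as safe only for multithreaded reads; writers must synchronize. The interleaved read-increment-write above can therefore lose updates, which is why the totals differ between runs. A minimal sketch of the conventional fix, serializing the update with a shared lock (the rowLock field is an addition, not part of the original code):

        private static readonly object rowLock = new object();

        private void dtUpdate()
        {
            for (int i = 0; i < 1000; i++)
            {
                // Take the lock so the read, the increment and the write happen
                // atomically with respect to the other worker threads.
                lock (rowLock)
                {
                    dtMain.Rows[0][0] = Convert.ToInt32(dtMain.Rows[0][0]) + 1;
                    dtMain.AcceptChanges();
                }
            }
        }

    With five threads of 1,000 increments each, the locked version ends at 5,005 (the initial 5 plus 5,000) on every run.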

    Read the article

  • Mercurial Tagging/Branching Strategy

    - by Tony Trozzo
    My current project is broken down into 3 parts: Website, Desktop Client, and a Plug-in for a third-party program. We originally started out with Subversion for our source control but decided to try Mercurial after reading Joel Spolsky's final post. Considering we hadn't really used the majority of svn's potential before, we figured that starting fresh with a basic idea of how source control works would make the transition easy. However, after setting up our initial repository, we're lost as to how tagging and branching should work on a project like this. Essentially, we're working on all 3 parts at the same time, and we want a release to be a combination of the 3 parts. Currently we're working in one repository. For the plug-in part, we have the first iteration finished, which we've been referring to as Plug-In v0.1. For the first official builds of the other two parts, we'd also like to refer to them as Website v0.1 and Desktop Client v0.1. When all three parts are at v0.1, we'd like to have a Full Project v0.1. Our problem is we're not sure how to manage all of this in the Hg repository. Would the best way to handle this be to create 3 separate repositories for the 3 stable versions and then 3 more repositories for current development? Currently we have this all in one repository. Should we do this with branches (are branches any different from cloning repositories?) and tags? Any help is greatly appreciated.

    Read the article

  • Castle Windsor upgrade causes TypeLoadException for generic types

    - by Neil Barnwell
    I have the following mapping in my Castle Windsor XML file, which has worked okay (unchanged) for some time:

        <component id="defaultBasicRepository"
                   service="MyApp.Models.Repositories.IBasicRepository`1, MyApp.Models"
                   type="MyApp.Models.Repositories.Linq.BasicRepository`1, MyApp.Models"
                   lifestyle="perWebRequest"/>

    I got this from the Windsor documentation at http://www.castleproject.org/container/documentation/v1rc3/usersguide/genericssupport.html. Since I upgraded Windsor, I now get the following exception at runtime:

        Description: An unhandled exception occurred during the execution of the
        current web request. Please review the stack trace for more information
        about the error and where it originated in the code.

        Exception Details: System.TypeLoadException: GenericArguments[0], 'T', on
        'MyApp.Models.Repositories.Linq.BasicRepository`1[TEntity]' violates the
        constraint of type parameter 'TEntity'.

        Source Error:
        Line 44: public static void ConfigureIoC()
        Line 45: {
        Line 46:     var windsor = new WindsorContainer("Windsor.xml");
        Line 47:
        Line 48:     ServiceLocator.SetLocatorProvider(() => new WindsorServiceLocator(windsor));

    I'm using ASP.NET MVC 1.0, Visual Studio 2008, and Castle Windsor as downloaded from http://sourceforge.net/projects/castleproject/files/InversionOfControl/2.1/Castle-Windsor-2.1.1.zip/download. Can anyone shed any light on this? I'm sure the upgrade of Castle Windsor is what caused it - it's been working well for ages.
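
    For what it's worth, one shape of problem that produces exactly this complaint is a constraint mismatch between the open generic service and its implementation, which is legal C# and so only surfaces when the container closes the generic type at runtime. A hedged sketch (these types are hypothetical stand-ins for the real IBasicRepository`1 and BasicRepository`1):

        // The service declares one set of constraints...
        public interface IBasicRepository<TEntity> where TEntity : class
        {
        }

        // ...and the implementation declares a stricter set. This compiles fine,
        // because an implementation may narrow its own type parameter.
        public class BasicRepository<TEntity> : IBasicRepository<TEntity>
            where TEntity : class, new()
        {
        }

    Resolving IBasicRepository<T> for a T that fails the implementation's extra constraint (here, no public parameterless constructor) can only be detected at runtime, when the container closes the generic with something like typeof(BasicRepository<>).MakeGenericType(typeof(T)) - which would be consistent with the upgrade changing when, or how strictly, Windsor performs that check.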

    Read the article

  • WebSVN with VisualSVN Server, anyone gotten authentication to work?

    - by Lasse V. Karlsen
    I have VisualSVN Server installed on a Windows server, serving several repositories. Since the web viewer built into VisualSVN Server is a minimalistic Subversion browser, I'd like to install WebSVN on top of my repositories. The problem, however, is that I can't seem to get authentication to work. Ideally I'd like my current repository authentication as specified in VisualSVN to work with WebSVN, so that although I see all the repository names in WebSVN, I can't actually browse into them without the right credentials. By visiting the cached copy of the topmost link on this Google query you can see what I've found so far that looks promising (the main blog page seems to have been destroyed; the domain of the topmost page I'm referring to is the-wizzard.de). There I found some PHP functions I could tack onto one of the PHP files in WebSVN. I followed the modifications there, but all I succeeded in doing was make WebSVN ask me for a username and password, and no matter what I input, it won't let me in. Unfortunately, PHP and Apache are largely black magic to me. So, has anyone successfully integrated WebSVN with VisualSVN-hosted repositories?

    Read the article

  • DDD and Client/Server apps

    - by Christophe Herreman
    I was wondering if any of you had successfully implemented DDD in a client/server app and would like to share some experiences. We are currently working on a smart client in Flex and a backend in Java. On the server we have a service layer exposed to the client that offers CRUD operations amongst some other service methods. I understand that in DDD these services should be repositories, and services should be used to handle use cases that do not fit inside a repository. Right now, we mimic these services on the client behind an interface and inject implementations (web services, RMI, etc.) via an IoC container. So some questions arise:

    - Should the server expose repositories to the client, or do we need some sort of facade (one that is able to handle security, for instance)?
    - Should the client implement repositories (and DDD in general?), knowing that in the client most of the logic is view-related and the real business logic lives on the server? All communication with the server happens asynchronously, and we have a single-threaded programming model on the client.
    - How about mapping client objects to server objects and vice versa? We tried DTOs but reverted to exposing the state of our objects and mapping directly to them. (I know this is considered bad practice, but it saves us an incredible amount of time.)

    In general I think a new generation of applications is coming with the growth of Flex, Silverlight and JavaFX, and I'm curious how DDD fits into this.

    Read the article

  • Subversion svn:externals - What's wrong here?

    - by Brandon Montgomery
    I first want to say I've read the Subversion manual. I've read this question. I've also read this question. Here's my dilemma. Let's say I have 3 repositories laid out like this:

        DataAccessObject/
          branches/
          tags/
          trunk/
            DataAccessObject/
            DataAccessObjectTests/
        PlanObject/
          branches/
          tags/
          trunk/
            PlanObject/
            PlanObjectTests/
        WinFormsPlanViewer/
          branches/
          tags/
          trunk/
            WinFormsPlanViewer/

    The PlanObject and DataAccessObject repositories contain shared projects. They are used by the WinFormsPlanViewer, but also by several other projects in several other repositories. Bear with me here. I put an svn:externals definition on the WinFormsPlanViewer/trunk folder like this:

        https://server/svn/PlanObject/trunk Objects
        https://server/svn/DataAccessObject/trunk Objects

    And here's what I see after I do an svn update:

        WinFormsPlanViewer/
          branches/
          tags/
          trunk/
            WinFormsPlanViewer/
            Objects/
              DataAccessObject/
              DataAccessObjectTests/

    The PlanObject stuff doesn't even come down in the update! I don't know if this has anything to do with it, but there's an externals definition on the PlanObject/trunk folder also:

        https://server/svn/DataAccessObject/trunk Objects

    What's going on here? What am I doing wrong? Are there bad consequences of referencing the PlanObject and the DataAccessObject from the WinFormsPlanViewer using svn:externals when the PlanObject references the DataAccessObject using svn:externals also?

    Read the article

  • Maintaining state and data context between requests in ASP.NET + EF4

    - by Nick
    I have an EF4/ASP.NET web application that is structured to use POCOs and generic repositories, based essentially on this excellent article. The application is relatively sophisticated, with one page that involves selecting and linking multiple entities to build up a complex user profile. This requires access to multiple entity types (20 or so) and their associated repositories across multiple posts. When a repository is first accessed, it uses the existing data context if one exists, else it creates a new context. The problem is that if the lifetime of the context is only per-request (as suggested in the article), you have to deal with multiple contexts and the complexity of detaching and attaching entities between contexts. My solution is to share the context between posts by creating a single view model that includes all required repositories (initialised to share the same context) plus any associated data, storing this model in a Session variable, and retrieving it from Session on subsequent page requests, thereby maintaining the same context across all posts until the profile is saved. This works fine, BUT I am concerned that I don't actually know exactly what is stored in the model's Session variable, or, more importantly, the size of the Session variable. So two questions, I suppose: firstly, should I look for a better solution to handle the shared context across posts (any suggestions welcome)? And secondly, what is actually stored in the Session when it includes a repository plus context? Any help appreciated!
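
    For comparison, a common alternative to holding repositories in Session is to scope one context to each HTTP request via HttpContext.Items, so every repository created during a request shares it and nothing outlives the request. A minimal sketch, assuming an ObjectContext-derived class named MyEntities (the names here are hypothetical):

        using System.Web;

        public static class ContextProvider
        {
            private const string Key = "__requestScopedContext";

            // Returns the request-scoped context, creating it on first use.
            public static MyEntities Current
            {
                get
                {
                    var items = HttpContext.Current.Items;
                    if (items[Key] == null)
                    {
                        items[Key] = new MyEntities();
                    }
                    return (MyEntities)items[Key];
                }
            }

            // Call from Application_EndRequest in Global.asax so the context
            // is disposed when the request finishes.
            public static void DisposeCurrent()
            {
                var context = HttpContext.Current.Items[Key] as MyEntities;
                if (context != null)
                {
                    context.Dispose();
                }
            }
        }

    Under this scheme, the entities edited across posts are re-attached to the current request's context rather than kept alive in Session, trading the attach/detach complexity mentioned above for a bounded, known Session footprint.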

    Read the article

  • Avoiding anemic domain model - a real example

    - by cbp
    I am trying to understand Anemic Domain Models and why they are supposedly an anti-pattern. Here is a real-world example. I have an Employee class, which has a ton of properties - name, gender, username, etc.:

        public class Employee
        {
            public string Name { get; set; }
            public string Gender { get; set; }
            public string Username { get; set; }
            // Etc.. mostly getters and setters
        }

    Next we have a system that involves rotating incoming phone calls and website enquiries (known as 'leads') evenly amongst sales staff. This system is quite complex, as it involves round-robining enquiries, checking for holidays, employee preferences, etc. So this system is currently separated out into a service: EmployeeLeadRotationService.

        public class EmployeeLeadRotationService : IEmployeeLeadRotationService
        {
            private IEmployeeRepository _employeeRepository;
            // ...plus lots of other injected repositories and services

            public Employee SelectEmployee(ILead lead)
            {
                // Etc. lots of complex logic
            }
        }

    Then on the backside of our website enquiry form we have code like this:

        public void SubmitForm()
        {
            var lead = CreateLeadFromFormInput();
            var selectedEmployee = Kernel.Get<IEmployeeLeadRotationService>()
                .SelectEmployee(lead);
            Response.Write(selectedEmployee.Name + " will handle your enquiry. Thanks.");
        }

    I don't really encounter many problems with this approach, but supposedly this is something I should run screaming from because it is an Anemic Domain Model. But for me it's not clear where the logic in the lead rotation service should go. Should it go in the lead? Should it go in the employee? What about all the injected repositories etc. that the rotation service requires - how would they be injected into the employee, given that most of the time when dealing with an employee we don't need any of these repositories?
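
    One hedged reading of the usual advice, as an illustration rather than the canonical answer: keep the orchestration in the service, because it spans many employees and needs repositories, but push rules that depend only on a single object's own state down into that object. For example, Employee could own its availability rule (the members below are invented for the sketch):

        using System;
        using System.Collections.Generic;

        public class Employee
        {
            public string Name { get; set; }
            public IList<DateTime> Holidays { get; private set; }

            public Employee()
            {
                Holidays = new List<DateTime>();
            }

            // A rule that needs only this employee's own state can live on the
            // entity itself - no injected repositories required.
            public bool IsAvailableOn(DateTime date)
            {
                return !Holidays.Contains(date.Date);
            }
        }

    The rotation service then shrinks to coordination: fetch candidates from the repository, filter with employee.IsAvailableOn(today), apply the round-robin policy. The entities stop being pure property bags, and nothing ever needs repositories injected into them.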

    Read the article

  • Recommendations for Continuous integration for Mercurial/Kiln + MSBuild + MSTest

    - by TDD
    We have our source code stored in Kiln/Mercurial repositories; we use MSBuild to build our product, and we have unit tests that utilize MSTest (Visual Studio unit tests). What solutions exist to implement a continuous integration machine (i.e. build machine)? The requirements for this are:

    - A build should be kicked off when necessary (i.e. code has changed in the repositories we care about).
    - Before the actual build, the latest version of the source code must be acquired from the repository we are building from.
    - The build must build the entire product.
    - The build must build all unit tests.
    - The build must execute all unit tests.
    - A summary of success/failure must be sent out after the build has finished; this must include information about the build itself, but also about which unit tests failed and which ones succeeded.
    - The summary must contain which changesets were in this build that were not yet in the previous successful (!) build.
    - The system must be configurable so that it can build from multiple branches (/repositories).

    Ideally, this system would run on a single box (our product isn't that big) without any server components. What solutions are currently available? What are their pros/cons? From the list above, what can be done and what cannot be done? Thanks

    Read the article

  • Keeping track of dependency revisions

    - by Samaursa
    I have a project with several dependencies that live in various repositories. Each time I commit changes to my project, I write down the revision numbers of all the dependent repositories, so that in the event I ever have to come back to this revision (let's call it 5), I can immediately know which revisions of the dependent repositories revision 5 is guaranteed to work with, update the dependencies to the specified revisions, and compile and run the project. So for example, if I have:

        Dep1 @ Revision 10
        Dep2 @ Revision 20
        Dep3 @ Revision 10
        Proj @ Revision 35

    and let's say that when Proj was on revision 17, the Dep1 revision was 5, the Dep2 revision was 13 and the Dep3 revision was 3, then in my SVN log I recorded something like this:

        !! Works with Dep1 Rev 5, Dep2 Rev 13, Dep3 Rev 3

    To me this seems primitive and makes me believe there is a better way to do it. Now, in one of my other questions, the Ivy dependency manager was recommended. I have not looked at it in detail yet (it seems complicated, and yet another thing I must learn). It seems to me that the log of SVN (and Mercurial, etc.) could have been split into Log and Dependencies (if any), where the latter could be switched off if there were no dependencies (unless of course I am unaware of an easier/better solution). This would allow for a cleaner log that maybe even warned at each new commit to check the previously defined dependencies again and make sure they have not changed. So, I was wondering how everyone manages this situation, and whether you have any tips, techniques, or programs you can offer. Thank you.

    Read the article

  • Project Naming Convention Feedback Please

    - by Sam Striano
    I am creating an ASP.NET MVC 3 application using Entity Framework 4. I am using the Repository/Service pattern and was looking for feedback. I currently have the following:

    - MVC Application (GTG.dll)
      - GTG
      - GTG.Controllers
      - GTG.ViewModels
    - Business POCOs (GTG.Business.dll)
      - Contains all business objects (Customer, Order, Invoice, etc...)
    - EF Model/Repositories (GTG.Data.dll)
      - GTG.Business (GTG.Context.tt) - I used the Entity POCO Generator Templates.
      - GTG.Data.Repositories
    - Service Layer (GTG.Data.Services.dll)
      - GTG.Data.Services - Contains all of the service objects, one per aggregate root.

    The following is a little sample code:

    Controller:

        Namespace Controllers
            Public Class HomeController
                Inherits System.Web.Mvc.Controller

                Function Index() As ActionResult
                    Return View(New Models.HomeViewModel)
                End Function
            End Class
        End Namespace

    Model:

        Namespace Models
            Public Class HomeViewModel
                Private _Service As CustomerService
                Public Property Customers As List(Of Customer)

                Public Sub New()
                    _Service = New CustomerService
                    _Customers = _Service.GetCustomersByBusinessName("Striano")
                End Sub
            End Class
        End Namespace

    Service:

        Public Class CustomerService
            Private _Repository As ICustomerRepository

            Public Sub New()
                _Repository = New CustomerRepository
            End Sub

            Function GetCustomerByID(ByVal ID As Integer) As Customer
                Return _Repository.GetByID(ID)
            End Function

            Function GetCustomersByBusinessName(ByVal Name As String) As List(Of Customer)
                Return _Repository.Query(Function(x) x.CompanyName.StartsWith(Name)).ToList
            End Function
        End Class

    Repository:

        Namespace Data.Repositories
            Public Class CustomerRepository
                Implements ICustomerRepository

                Public Sub Add(ByVal Entity As Business.Customer) Implements IRepository(Of Business.Customer).Add

                End Sub

                Public Sub Delete(ByVal Entity As Business.Customer) Implements IRepository(Of Business.Customer).Delete

                End Sub

                Public Function GetByID(ByVal ID As Integer) As Business.Customer Implements IRepository(Of Business.Customer).GetByID
                    Using db As New GTGContainer
                        Return db.Customers.FirstOrDefault(Function(x) x.ID = ID)
                    End Using
                End Function

                Public Function Query(ByVal Predicate As System.Linq.Expressions.Expression(Of System.Func(Of Business.Customer, Boolean))) As System.Linq.IQueryable(Of Business.Customer) Implements IRepository(Of Business.Customer).Query
                    Using db As New GTGContainer
                        Return db.Customers.Where(Predicate)
                    End Using
                End Function

                Public Sub Save(ByVal Entity As Business.Customer) Implements IRepository(Of Business.Customer).Save

                End Sub
            End Class
        End Namespace

    Read the article

  • SQL SERVER – Fundamentals of Columnstore Index

    - by pinaldave
    There are two kinds of storage in a database: row store and column store. Row store does exactly what the name suggests - it stores rows of data on a page - and column store stores all the data in a column on the same page. These columns are much easier to search: instead of a query searching all the data in an entire row whether the data is relevant or not, column store queries need only search a much smaller number of columns. This means major increases in search speed and hard drive use. Additionally, columnstore indexes are heavily compressed, which translates to even greater memory savings and faster searches.

    This may look very exciting, but it does not mean that you should convert every single index from row store to column store; one has to understand the proper places to use row store or column store indexes. Let us understand in this article how the columnstore type of index differs. Column store indexes are run by Microsoft's VertiPaq technology. However, all you really need to know is that this method of storing data as columns on a single page is much faster and more efficient. Creating a column store index is very easy, and you don't have to learn new syntax to create one. You just need to specify the keyword "COLUMNSTORE" and enter the data as you normally would. Keep in mind that once you add a column store index to a table, though, you cannot delete, insert or update the data - it is READ ONLY. However, since column store will mainly be used for data warehousing, this should not be a big problem. You can always use partitioning to avoid rebuilding the index.

    A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data traditionally has been stored. The difference between the two approaches: in the case of row store indexes, multiple pages contain multiple rows, with each row's columns spanning across pages; in the case of column store indexes, multiple pages each contain a single column. This means that only the columns needed to solve a query will be fetched from disk. Additionally, there is a good chance of redundant data within a single column, which helps compress the data further; this has a positive effect on the buffer hit rate, as most of the data will be in memory and will not need to be retrieved. Let us see a small example of how a columnstore index improves the performance of a query on a large table. As a first step, let us create a data set large enough to show the performance impact of a columnstore index. The time taken to create the sample database may vary on different computers based on their resources.
        USE AdventureWorks
        GO
        -- Create New Table
        CREATE TABLE [dbo].[MySalesOrderDetail](
            [SalesOrderID] [int] NOT NULL,
            [SalesOrderDetailID] [int] NOT NULL,
            [CarrierTrackingNumber] [nvarchar](25) NULL,
            [OrderQty] [smallint] NOT NULL,
            [ProductID] [int] NOT NULL,
            [SpecialOfferID] [int] NOT NULL,
            [UnitPrice] [money] NOT NULL,
            [UnitPriceDiscount] [money] NOT NULL,
            [LineTotal] [numeric](38, 6) NOT NULL,
            [rowguid] [uniqueidentifier] NOT NULL,
            [ModifiedDate] [datetime] NOT NULL
        ) ON [PRIMARY]
        GO
        -- Create clustered index
        CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
        ([SalesOrderDetailID])
        GO
        -- Create Sample Data Table
        -- WARNING: This Query may run up to 2-10 minutes based on your system's resources
        INSERT INTO [dbo].[MySalesOrderDetail]
        SELECT S1.*
        FROM Sales.SalesOrderDetail S1
        GO 100

    Now let us do a quick performance test. I have kept STATISTICS IO ON to measure how much IO the following queries take. In my test I will first run the query that uses the regular index and note its IO usage. After that we will create the columnstore index and measure the IO of the same query.

        -- Performance Test
        -- Comparing Regular Index with ColumnStore Index
        USE AdventureWorks
        GO
        SET STATISTICS IO ON
        GO
        -- Select Table with regular Index
        SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
               SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
        FROM [dbo].[MySalesOrderDetail]
        GROUP BY ProductID
        ORDER BY ProductID
        GO
        -- Table 'MySalesOrderDetail'. Scan count 1, logical reads 342261, physical reads 0, read-ahead reads 0.
        -- Create ColumnStore Index
        CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
        ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID)
        GO
        -- Select Table with Columnstore Index
        SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
               SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
        FROM [dbo].[MySalesOrderDetail]
        GROUP BY ProductID
        ORDER BY ProductID
        GO

    It is very clear from the results that the query performs extremely fast after creating the columnstore index. The number of pages it has to read to run the query is drastically reduced, as the columns needed by the query are stored in the same pages and the query does not have to go through every single page to read them. If we enable the execution plan and compare, we can see that the columnstore index performs far better than the regular index in this case. Let us clean up the database:

        -- Cleanup
        DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
        GO
        TRUNCATE TABLE dbo.MySalesOrderDetail
        GO
        DROP TABLE dbo.MySalesOrderDetail
        GO

    In future posts we will see cases where a columnstore index is not the appropriate solution, as well as a few other tricks and tips for columnstore indexes. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Best Practices For Database Consolidation On Exadata - New Whitepapers

    - by Javier Puerta
    Best Practices For Database Consolidation On Exadata Database Machine (Nov. 2011): Consolidation can minimize idle resources, maximize efficiency, and lower costs when you host multiple schemas, applications or databases on a target system. Consolidation is a core enabler for deploying Oracle Database on public and private clouds. This paper provides the Exadata Database Machine (Exadata) consolidation best practices to set up and manage systems and applications for maximum stability and availability. Download here.

    Oracle Exadata Database Machine Consolidation: Segregating Databases and Roles (Sep. 2011): This paper is focused on the aspects of segregating databases from each other in a platform consolidation environment on an Oracle Exadata Database Machine. Platform consolidation is the consolidation of multiple databases onto a single Oracle Exadata Database Machine. When multiple databases are consolidated on a single Database Machine, it may be necessary to isolate certain database components or functions in order to meet business requirements and provide best practices for a secure consolidation. In this paper we outline the use of Oracle Exadata database-scoped security to securely separate database management, and provide a detailed case study that illustrates the best practices. Download here.

    Read the article

  • Improve your Application Performance with .NET Framework 4.0

    Nice article on CodeGuru. The processors we use today are quite different from those of just a few years ago, as most processors now provide multiple cores and/or multiple threads. With multiple cores and/or threads, we need to change how we tackle problems in code. Yes, we can still write code that performs an action in a top-down fashion to complete a task. This approach will continue to work; however, you are not taking advantage of the extra processing power available. The best way to take advantage of the extra cores prior to .NET Framework 4.0 was to create threads and/or utilize the ThreadPool. For many developers, utilizing threads or the ThreadPool can be a little daunting. The .NET Framework 4.0 drastically simplifies the process of utilizing the extra processing power through the Task Parallel Library (TPL). The article covers the topics "Data Parallelism", "Parallel LINQ (PLINQ)" and "Task Parallelism".
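
    As a concrete illustration of what the article describes, here is a minimal sketch contrasting a sequential loop with its TPL and PLINQ equivalents (the workload function is made up for the example):

        using System;
        using System.Linq;
        using System.Threading.Tasks;

        class TplSketch
        {
            // Stand-in for a CPU-bound computation.
            static double Work(int n)
            {
                return Math.Sqrt(n) * Math.Sin(n);
            }

            static void Main()
            {
                var results = new double[1000000];

                // Sequential, top-down version: one core does all the work.
                for (int i = 0; i < results.Length; i++)
                    results[i] = Work(i);

                // Data parallelism: Parallel.For partitions the range across
                // the available cores.
                Parallel.For(0, results.Length, i => results[i] = Work(i));

                // PLINQ: the same idea expressed as a parallel query.
                double total = Enumerable.Range(0, results.Length)
                                         .AsParallel()
                                         .Select(Work)
                                         .Sum();

                Console.WriteLine(total);
            }
        }

    The two parallel versions produce the same results as the plain loop; the difference is that the TPL decides how to split the work across cores, which is exactly the simplification over hand-managed threads that the article highlights.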

    Read the article

  • Creating Hosting Accounts in WHM on a Single IP

    - by Daniel Hanly
    I've just purchased a VPS with the hope of transferring multiple shared hosting accounts onto it. The problem is that I've only got 2 IP addresses with my VPS. I can create an account and assign it an IP address, but once I've done this once, I can't do it again. (1 IP address is my main root WHM IP, the other is my new hosting account IP). Can I create multiple hosting accounts and use the same IP? How would I manage multiple hosting accounts in this way? The domain for this hosting account has been purchased by the client, and they hold it (can't transfer for 60 days), so I need to adjust the DNS settings to redirect to my newly created hosting area - how can I do this without a dedicated IP address?

    Read the article
