Search Results

Search found 18245 results on 730 pages for 'recursive query'.

Page 523/730

  • SQL Server 2000 tables

    - by user40766
    We currently have an SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, which is an integer field. The table has a clustered index on memberid. The table is now about 200 million rows. Indexing and maintenance are becoming issues. We are debating splitting the table into a one-table-per-user model. This would imply that we would end up with a very large number of tables, potentially up to 2,147,483,647, considering just positive values. My questions: Does anyone have any experience with a SQL Server (2000/2005) installation with millions of tables? What are the implications of this architecture with regards to maintenance and access using Query Analyzer, Enterprise Manager, etc.? What are the implications of having such a large number of indexes in a database instance? All comments are appreciated. Thanks

    Read the article

  • HTML5Rocks Live, Episode 1

    HTML5Rocks Live, Episode 1 In this episode of HTML5Rocks Live, Boris, Eric and Paul join us to show some great new libraries and performance tips. Please leave your comments on our plus page at goo.gl In the first chapter, Paul shows how to use some of Chrome's new developer tools to understand how things are rendering and get improved performance. In the second chapter (21:25), Boris shows off his new device.js library to help make development of mobile web applications and sites easier. Eric closes the hangout (40:00) and talks about his new file system API polyfill that uses IndexedDB as its back end. 02:15 - Scroll Effects Demo goo.gl 23:04 - Media Queries Site goo.gl 24:15 - WURFL goo.gl 26:40 - Boris' Device Library goo.gl 29:28 - Device.js Demo goo.gl 33:25 - Bug to add touch-enabled media query to Chrome, please star goo.gl 35:00 - Chrome's DevTools for Mobile Development 38:56 - Paul Irish's Touch Demos goo.gl 40:43 - File System API Book goo.gl 43:10 - Eric's idb.filesystem.js goo.gl 44:27 - idb.filesystem HTML5 File System Demo goo.gl 47:33 - HTML5 Filesystem Playground goo.gl From: GoogleDevelopers Views: 12239 221 ratings Time: 52:29 More in Science & Technology

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases. There is a reason it’s called Default The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms; this means that our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, which ever comes first. Aside: What’s up with the 1.3 MB, I thought you said the buffer was 4 MB?The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later. Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (eg. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved since most targets support bulk processing of the events so they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full. I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases… What event just fired?! How about now?! Now?! If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on You Tube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule. 
This option only makes sense for the asynchronous targets since those are the ones where we allow events to build up in the event buffer – if you’re using one of the synchronous targets this option isn’t relevant. Avoid forgotten events by increasing your memory Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. To optimize for server performance and help ensure that Extended Events doesn’t block the server, the engine drops events that can’t be published to a buffer because the buffer is full. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field. Aside: Should you care if you’re dropping events? Maybe not – think about why you’re collecting data in the first place and whether you’re really going to miss a few dropped events. For example, if you’re collecting query duration stats over thousands of executions of a query it won’t make a huge difference to miss a couple executions. Use your best judgment. If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over collecting. Do you need all the actions you’ve specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you’ve collected. Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss and you should stick with this setting since it is the best choice for keeping the impact on server performance low. You’ll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you’re worried that you’re losing events you should be increasing your event buffer memory as described in this section. Some events are too big to fail A less common reason for dropping an event is when an event is so large that it can’t fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time checking the largest_event_dropped_size. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a large event buffer. To specify a large event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you’re dropping many normal events then you should increase your normal event buffer size.)
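For a quick check, both of the numbers mentioned above can be read straight off that DMV; a minimal sketch (column names as documented for sys.dm_xe_sessions in SQL Server 2008 and later, filter on your session name as needed):
    SELECT s.name,
           s.dropped_event_count,          -- events dropped because the buffers were full
           s.largest_event_dropped_size    -- size of events too large to fit in a buffer
    FROM sys.dm_xe_sessions AS s;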
Partitioning: moving your events to a sub-division Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science… You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created, with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used: None: 3 buffers (fixed); Node: 3 * number_of_nodes; CPU: 2.5 * number_of_cpus. Here are some examples of what this means for different Node/CPU counts:
    Configuration       None        Node        CPU
    2 CPUs, 1 Node      3 buffers   3 buffers   5 buffers
    6 CPUs, 2 Nodes     3 buffers   6 buffers   15 buffers
    40 CPUs, 5 Nodes    3 buffers   15 buffers  100 buffers
Aside: Buffer size on multi-processor computers: As the number of Nodes or CPUs increases, the size of the event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you’ll either see events being dropped or you’ll get an error when you create your session because you’re below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration. The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things: Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory. Are you over collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session. Are you writing the file target to the same slow disk that you use for TempDB and your other high activity databases? <kidding> <not really> It’s always worth considering the end to end picture – if you’re writing events to a file you can be impacted by I/O, network; all the usual stuff. Assuming you’ve ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it’s possible to have a successful event session (e.g. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that’s where the art comes in. This is largely a matter of experimentation. On the bright side you probably won’t need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace.
You will likely only care about the impact if you are trying to set up a long running event session that will be part of your everyday workload – sessions used for short term troubleshooting will likely fall into the “reasonably expected impact” category. Hey buddy – I think you forgot something OK, there are two options I didn’t cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn’t cover.) I’m going to leave causality for another post since it’s not really related to session behavior, it’s more about event analysis. - Mike
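As a rough illustration of how the options discussed above fit together, here is a sketch of a CREATE EVENT SESSION statement. The session name, event, and file path are made up, and the target name assumes the SQL Server 2008 syntax (package0.asynchronous_file_target); treat it as a starting point rather than a recommended configuration:
    CREATE EVENT SESSION [demo_session] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
        (ACTION (sqlserver.sql_text))
    ADD TARGET package0.asynchronous_file_target
        (SET filename = N'C:\XELogs\demo_session.xel')
    WITH (
        MAX_MEMORY = 8 MB,                               -- total event buffer memory, divided into buffers
        MAX_DISPATCH_LATENCY = 5 SECONDS,                -- process buffers at least every 5 seconds
        EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,  -- the default; avoid NO_EVENT_LOSS
        MEMORY_PARTITION_MODE = NONE,                    -- or PER_NODE / PER_CPU to partition the buffers
        STARTUP_STATE = ON                               -- start the session when the server starts
    );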

    Read the article

  • CodePlex Daily Summary for Friday, September 07, 2012

    CodePlex Daily Summary for Friday, September 07, 2012Popular ReleasesUmbraco CMS: Umbraco 4.9.0: Whats newThe media section has been overhauled to support HTML5 uploads, just drag and drop files in, even multiple files are supported on any HTML5 capable browser. The folder content overview is also much improved allowing you to filter it and perform common actions on your media items. The Rich Text Editor’s “Media” button now uses an embedder based on the open oEmbed standard (if you’re upgrading, enable the media button in the Rich Text Editor datatype settings and set TidyEditorConten...menu4web: menu4web 0.4.1 - javascript menu for web sites: This release is for those who believe that global variables are evil. menu4web has been wrapped into m4w singleton object. Added "Vertical Tabs" example which illustrates object notation.WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.1: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...iPDC - Free Phasor Data Concentrator: iPDC-v1.3.1: iPDC suite version-1.3.1, Modifications and Bug Fixed (from v 1.3.0) New User Manual for iPDC-v1.3.1 available on websites. Bug resolved : PMU Simulator TCP connection error and hang connection for client (PDC). Now PMU Simulator (server) can communicate more than one PDCs (clients) over TCP and UDP parallely. PMU Simulator is now sending the exact data frames as mentioned in data rate by user. PMU Simulator data rate has been verified by iPDC database entries and PMU Connection Tes...Microsoft SQL Server Product Samples: Database: AdventureWorks OData Feed: The AdventureWorks OData service exposes resources based on specific SQL views. The SQL views are a limited subset of the AdventureWorks database that results in several consuming scenarios: CompanySales Documents ManufacturingInstructions ProductCatalog TerritorySalesDrilldown WorkOrderRouting How to install the sample You can consume the AdventureWorks OData feed from http://services.odata.org/AdventureWorksV3/AdventureWorks.svc. You can also consume the AdventureWorks OData fe...Desktop Google Reader: 1.4.6: Sorting feeds alphabetical is now optional (see preferences window)DotNetNuke® Community Edition CMS: 06.02.03: Major Highlights Fixed issue where mailto: links were not working when sending bulk email Fixed issue where uses did not see friendship relationships Problem is in 6.2, which does not show in the Versions Affected list above. 
Fixed the issue with cascade deletes in comments in CoreMessaging_Notification Fixed UI issue when using a date fields as a required profile property during user registration Fixed error when running the product in debug mode Fixed visibility issue when...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.B INI Sharp Library: B INI Sharp Library v1.0.0.0 Final Realsed: The frist realsedActive Social Migrator: ActiveSocialMigrator 1.0.0 Beta: Beta release for the Active Social Migration tool.EntLib.com????????: ??????demo??-For SQL 2005-2008: EntLibShopping ???v3.0 - ??????demo??,?????SQL SERVER 2005/2008/2008 R2/2012 ??????。 ??(??)??????。 THANKS.Sistem LPK Pemkot Semarang: Panduan Penggunaan Sistem LPK: Panduan cara menggunakan Aplikasi Sistem LPK Bagian Pembangunan Kota SemarangActive Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release Added fallback icon if unable to get the image/icon from the Cloud Service Removed some stale plugins that were either out dated or incomplete. Added handler for *.ab files for restoring backups Added plugin to create device backups Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\ Added custom folder icon for the android backups directory better error handling for installing an apk bug fixes for the Runn...The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. New features include: Built-in search engine using Lucene.NET Flood control improvements Notifications improvements: sync option and mail body View Roadmap for more details webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????“??????????”,??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??“????” ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??“????”??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail BugfixSmart Data Access layer: Smart Data access Layer Ver 3: In this version support executing inline query is added. Check Documentation section for detail.New ProjectsAdding 2013 Jewish Holidays for Outlook2003: Instruction: Copy the outlook.hol file to your compuer where Outlook2003 is installed. Double click the file, choose "Israel" and continue. 
That's it Agilcont System: Sistema de contabilidad para empresas privadas de preferencia para cajas que trabajan con efectivo en soles, dolares y con el material oroARB (A Request Broker): The idea is something like a Request Broker, but with some additional functionality.BATTLE.NET - SDK: This SDK provides the ability to use the Battle.net (Blizzard) Services for all supported Games such Diablo 3, World of Warcraft. Container Terminal System: SummaryDeclarative UX Streaming Data Language for the Cloud: Bringing a better communication paradigm for media and data..Get User Profile Information from SharePoint UserProfile Service: Used SharePoint object model to get the user profile information from the User Profile Service.Guess The City & State Windows 8 Source Code: Source code for Guess The City & State in Malaysia Windows 8 AppJquery Tree: This project is to demonstrate tree basic functionality.MCEBuddy 2.x: Convert and Remove Commercials for your Windows Media CenterMvcDesign: MvcDesign engine implementation projectMy Task Manager: This is a task manager module for DotNetNuke. I am using it to get started developing modules.MyAppwithbranches: MyAppwithbranchesProjecte prova: rpyGEO: pyGEO is a python package capable of parsing microarray data files. It also has a primitive plotting function.Scarlet Road: Scarlet Road is a top-down shooter. It's you against an unending horde of monsters.simplecounter: A simple counter, cick and counter.SiteEmpires: ????????Soundcloud Loader: Simple Tool for downloading Tracks from Soundcloud.Windows Phone Samples: Windows Phone code samples.

    Read the article

  • Automating an SSRS 2008 R2 Report Snapshots and run report with most recent data

    - by Mr Shoubs
    I would like to automate a report snapshot, but there is only an option to take a snapshot in the Report History Tab. All the resources I've found suggest I need to go to processing options and select "Render this report from a snapshot". But I don't want to do that - when I go to a report, I want to get the most recent data. However, daily at midnight I'd like to take a snapshot and store it in the history in case I want to compare the reports as of midnight for the last few weeks. Or am I doing this wrong and have to create a subscription instead? Note: this is for an auditing database and has way too much data in it to query a range with more than 1 day in it - reports are restricted as such. (1 day has over 1 million rows on its own).

    Read the article

  • Setting WMI permissions remotely

    - by christianlinnell
    I've developed a tool that does a simple retrieval of registered services and installed applications from remote Windows Server 2003 servers via WMI. My problem is, the tool needs to be run on an ad hoc basis by a user who is not an administrator of those servers. I've created a domain user (which the tool will use to run the query) that I'd like to grant remote WMI permission on each server, but given there are about 200 servers, I can't do it manually. Is there a way to grant access to that domain user via WMI, or by distributing a registry change via SMS or Group Policy?

    Read the article

  • SYS2 Scripts Updated – Scripts to monitor database backup, database space usage and memory grants now available

    - by Davide Mauri
    I’ve just released three new scripts of my “sys2” script collection that can be found on CodePlex: Project Page: http://sys2dmvs.codeplex.com/ Source Code Download: http://sys2dmvs.codeplex.com/SourceControl/changeset/view/57732 The three new scripts are the following: sys2.database_backup_info.sql, sys2.query_memory_grants.sql and sys2.stp_get_databases_space_used_info.sql. Here are some more details: database_backup_info This script has been made to quickly check if and when backups were done. It will report the last full, differential and log backup date and time for each database. Along with this information you’ll also get some additional metadata that shows if a database is a read-only database and its recovery model: By default it will check only the last seven days, but you can change this value just by specifying how many days back you want to check. To analyze the last seven days, and list only the databases with the FULL recovery model without a log backup: select * from sys2.databases_backup_info(default) where recovery_model = 3 and log_backup = 0 To analyze the last fifteen days, and list only the databases with the FULL recovery model with a differential backup: select * from sys2.databases_backup_info(15) where recovery_model = 3 and diff_backup = 1 I just love this script, I use it every time I need to check that backups are not too old and that t-log backups are correctly scheduled. query_memory_grants This is just a wrapper around sys.dm_exec_query_memory_grants that enriches the default result set with the text of the query for which memory has been granted or is waiting for a memory grant and, optionally, its execution plan. stp_get_databases_space_used_info This is a stored procedure that lists all the available databases and, for each one, the overall size, the used space within that size, the maximum size it may reach and the auto grow options. This is another script I use every day in order to be able to monitor, track and forecast database space usage. As usual, feedback and suggestions are more than welcome!

    Read the article

  • Dynamic LINQ in an Assembly Near By

    - by Ricardo Peres
    You may recall my post on Dynamic LINQ. I said then that you had to download Microsoft's samples and compile the DynamicQuery project (or just grab my copy), but there's another way. It turns out Microsoft included the Dynamic LINQ classes in the System.Web.Extensions assembly, not the one from ASP.NET 2.0, but the one that was included with ASP.NET 3.5! The only problem is that all types are private: Here's how to use it:
    Assembly asm = typeof(UpdatePanel).Assembly;
    Type dynamicExpressionType = asm.GetType("System.Web.Query.Dynamic.DynamicExpression");
    MethodInfo parseLambdaMethod = dynamicExpressionType
        .GetMethods(BindingFlags.Public | BindingFlags.Static)
        .Where(m => (m.Name == "ParseLambda") && (m.GetParameters().Length == 2))
        .Single()
        .MakeGenericMethod(typeof(DateTime), typeof(Boolean));
    Func<DateTime, Boolean> filterExpression = (parseLambdaMethod.Invoke(null, new Object[] { "Year == 2010", new Object[0] }) as Expression<Func<DateTime, Boolean>>).Compile();
    List<DateTime> list = new List<DateTime> { new DateTime(2010, 1, 1), new DateTime(1999, 1, 12), new DateTime(1900, 10, 10), new DateTime(1900, 2, 20), new DateTime(2012, 5, 5), new DateTime(2012, 1, 20) };
    IEnumerable<DateTime> filteredDates = list.Where(filterExpression);

    Read the article

  • Re-running SSRS subscription jobs that have failed

    - by Rob Farley
    Sometimes, an SSRS subscription fails for some reason. It can be annoying, particularly as the appropriate response can be hard to see immediately. There may be a long list of jobs that failed one morning if a Mail Server is down, and trying to work out a way of running each one again can be painful. It’s almost an argument for using shared schedules a lot, but the problem with this is that there are bound to be other things on that shared schedule that you wouldn’t want to be re-run. Luckily, there’s a table in the ReportServer database called dbo.Subscriptions, which is where LastStatus of the Subscription is stored. Having found the subscriptions that you’re interested in, finding the SQL Agent Jobs that correspond to them can be frustrating. Luckily, the jobstep command contains the subscriptionid, so it’s possible to look them up based on that. And of course, once the jobs have been found, they can be executed easily enough. In this example, I produce a list of the commands to run the jobs. I can copy the results out and execute them. select 'exec sp_start_job @job_name = ''' + cast(j.name as varchar(40)) + '''' from msdb.dbo.sysjobs j  join  msdb.dbo.sysjobsteps js on js.job_id = j.job_id join  [ReportServer].[dbo].[Subscriptions] s  on js.command like '%' + cast(s.subscriptionid as varchar(40)) + '%' where s.LastStatus like 'Failure sending mail%'; Another option could be to return the job step commands directly (js.command in this query), but my preference is to run the job that contains the step.
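    For completeness, the variation the author mentions - returning the job step commands themselves rather than building sp_start_job calls - is just a different select list on the same joins (an untested sketch based on the query above):
        select js.command
        from msdb.dbo.sysjobs j
        join msdb.dbo.sysjobsteps js on js.job_id = j.job_id
        join [ReportServer].[dbo].[Subscriptions] s
          on js.command like '%' + cast(s.subscriptionid as varchar(40)) + '%'
        where s.LastStatus like 'Failure sending mail%';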

    Read the article

  • Help with Ms Access 2007 Combo boxes

    - by Yaaqov
    What's the most efficient way to "chain" combo boxes in an Access 2007 form, so that the result of the first affects the contents of the second? I already know how to associate a combo box on a form with a query. Here's an example of my scenario: cmbCarMake Behavior: User starts typing, and the list shows all manufacturers in a table starting with those characters (e.g., "Ford") cmbCarModel Behavior: Once cmbCarMake has a Make selected, this object will limit the possible models the user can search for by only displaying models from that one manufacturer. (e.g., "F-150") Thank you for any examples/links.

    Read the article

  • Pagination, Duplicate Content, and SEO

    - by Iamtotallylost
    Please consider a list of items (forum comments, articles, shoes, doesn't matter) which are spread over multiple pages. Different sort orders are supported (by date, by popularity, by price, etc). So, an URL might look like this (I use the query style here to simplify things): /items?id=1234&page=42&sort=popularity /items?id=1234&page=5&sort=date Now, in terms of SEO, I think I should be worried about duplicate content. After all, each item appears at least as many times as there are sort orders. I've seen Matt Cutts talking about the rel=canonical link tag, but he also said that the canonical page should have very similar content. But this is not the case here because page #1 in a non-canonical sort order might have completely different items than page #1 in the canonical sort order. For a given non-canonical page, there is no clear canonical page listing all the same items, so I think rel=canonical won't help here. Then I thought about using the noindex meta tag on all pages with non-canonical sort order, and not using it on all pages with canonical sort order. However, if I use that method, what will happen with backlinks that are going to non-canonical pages -- will they still spread their page rank juice, even though the first page googlebot (or any other crawler) is going to encounter is marked as "noindex"? Can you please comment on my problem and what you think is the best solution? If you think you have a better solution, please consider that 1) I do not want to use Javascript for this, 2) I do not want all the items to be on one page. Thank you.

    Read the article

  • How to set _optimizer_search_limit and _optimizer_max_permutations in Oracle10g.

    - by user52856
    I am working on a product that must support both MSSQL and Oracle (10g and 11g). I have some very complex queries that seem to run without issue on MSSQL 2005/2008, but very, very slow with Oracle. The CPU on the oracle server skyrockets for long periods of time, and it seems like the optimizer may be trying to find the best execution plan for the very complex query. I did some Googling to figure out how to limit the amount of time the optimizer spends on this, and came up with _optimizer_search_limit and _optimizer_max_permutations. Both of these parameters are hidden in Oracle 10g, and setting them in init.ora doesn't seem to make any difference. How do I set these parameters in Oracle. Or am I just totally barking up the wrong tree with the assumption that the optimizer is spending several minutes finding an execution plan? Thanks.
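    For what it's worth, Oracle's hidden (underscore-prefixed) parameters can generally only be set when the name is double-quoted; whether these two are session-modifiable on a given patch level is something to verify, and hidden parameters are best touched only with Oracle Support's guidance. A hedged sketch with placeholder values:
        -- for the current session only
        ALTER SESSION SET "_optimizer_max_permutations" = 2000;
        -- instance-wide via the spfile (takes effect after a restart)
        ALTER SYSTEM SET "_optimizer_search_limit" = 5 SCOPE = SPFILE;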

    Read the article

  • Optimize SUMMARIZE with ADDCOLUMNS in Dax #ssas #tabular #dax #powerpivot

    - by Marco Russo (SQLBI)
    If you started using DAX as a query language, you might have encountered some performance issues by using SUMMARIZE. The problem is related to the calculation you put in the SUMMARIZE, by adding what are called extension columns, which compute their value within a filter context defined by the rows considered in the group that the SUMMARIZE uses to produce each row in the output. Most of the time, for simple table expressions used in the first parameter of SUMMARIZE, you can optimize performance by removing the extended columns from the SUMMARIZE and adding them by using an ADDCOLUMNS function. In practice, instead of writing SUMMARIZE( <table>, <group_by_column>, <column_name>, <expression> ) you can write: ADDCOLUMNS(     SUMMARIZE( <table>, <group_by_column> ),     <column_name>, CALCULATE( <expression> ) ) The performance difference might be huge (orders of magnitude), but this optimization might produce different semantics, and in those cases it should not be used. A longer discussion of this topic is included in my Best Practices Using SUMMARIZE and ADDCOLUMNS article on SQLBI, which also includes several details about the DAX syntax with extended columns. For example, did you know that you can create an extended column in SUMMARIZE and ADDCOLUMNS with the same name as existing measures? It is *not* a good thing to do, and by reading the article you will discover why. Enjoy DAX!

    Read the article

  • Oracle announces the new release of Oracle Hyperion EPM System

    - by Stefano Oddone
    On April 4, during the Oracle Open World held in Tokyo, Mark Hurd, President of Oracle, announced the imminent release of version 11.1.2.2 of Oracle Hyperion Enterprise Performance Management System, the leading platform in the worldwide EPM market. The new release introduces an extremely significant set of new modules, improvements to existing modules, and technological and functional evolutions that further increase the value and competitive advantage provided by the Oracle offering. Among the main highlights:
    - introduction of the new Oracle Hyperion Project Financial Planning module, a vertical solution for the financial planning, funding and budgeting of projects, initiatives, activities and engagements
    - enrichment of Oracle Hyperion Planning with built-in functionality supporting Predictive Planning and Rolling Forecasts, to support ever more flexible, frequent and effective budgeting and forecasting processes
    - introduction of the new Oracle Account Reconciliation Manager module for managing the entire life cycle of account reconciliation activities between General Ledger and Sub-Ledger, or between different accounting systems
    - enrichment of Oracle Hyperion Financial Management with a completely new web interface and the introduction of Smart Dimensionality, that is, the ability to define models with more than the 12 "canonical" dimensions typical of previous releases, with optimized handling of queries and calculations according to the cardinality of the dimensions involved
    - enrichment of Oracle Hyperion Profitability & Cost Management with Detailed Profitability functionality, that is, the ability to implement costing and profitability models in the presence of very high cardinality dimensions such as, for example, the SKUs of the Retail and Distribution industries, the customers of Retail Banks and Telcos, and the individual users of Utilities
    - enrichment of Oracle Hyperion Financial Data Quality Management, in particular of the ERP Integrator component, with extended pre-built integrations to SAP Financials and JD Edwards Enterprise One Financials
    - introduction of Oracle Exalytics, the first engineered system specifically designed for In-Memory Analytics, which delivers unprecedented calculation and analysis performance as data volumes, model sizes and user concurrency grow, thus supporting ever more articulated and distributed Business Intelligence, Planning & Budgeting and Cost Allocation processes
    On April 19 an event will be held at the Oracle office in Cinisello Balsamo (MI) where the new features introduced by the new release of the EPM System will be presented in detail; the event will be repeated on May 3 at the Oracle office in Rome. The event is public and free of charge; anyone interested can register here. For further information you can refer to the official Press Release. Here you can also watch Mark Hurd's Open World talk on the Oracle Strategy for Business Analytics.

    Read the article

  • Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    - by Bakhtiyor
    I have mailserver configure using dovecot+postfix+mysql and it was runnig fine in the server(Ubuntu Server). But during last week it stopped working correctly. It doesn't send email. When I try to telnet localhost smtp I'm connecting successfully but when I do mail from:<[email protected]> and hit Enter it hangs on, nothing happen. Having reviewed /var/log/mail.log file I've found out that probably(99%) the problem is on postfix when it is trying to connect to MySQL server. If you see the log file given below you can see that it says Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2). Nov 14 21:54:36 ns1 dovecot: dovecot: Killed with signal 15 (by pid=7731 uid=0 code=kill) Nov 14 21:54:36 ns1 dovecot: Dovecot v1.2.9 starting up (core dumps disabled) Nov 14 21:54:36 ns1 dovecot: auth-worker(default): mysql: Connected to localhost (mailserver) Nov 14 21:54:44 ns1 postfix/postfix-script[7753]: refreshing the Postfix mail system Nov 14 21:54:44 ns1 postfix/master[1670]: reload -- version 2.7.0, configuration /etc/postfix Nov 14 21:54:52 ns1 postfix/trivial-rewrite[7759]: warning: connect to mysql server localhost: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) Nov 14 21:54:52 ns1 postfix/trivial-rewrite[7759]: fatal: mysql:/etc/postfix/mysql-virtual-alias-maps.cf(0,lock|fold_fix): table lookup problem Nov 14 21:54:53 ns1 postfix/master[1670]: warning: process /usr/lib/postfix/trivial-rewrite pid 7759 exit status 1 Nov 14 21:54:53 ns1 postfix/cleanup[7397]: warning: problem talking to service rewrite: Connection reset by peer Nov 14 21:54:53 ns1 postfix/master[1670]: warning: /usr/lib/postfix/trivial-rewrite: bad command startup -- throttling Nov 14 21:54:53 ns1 postfix/smtpd[7071]: warning: problem talking to service rewrite: Success I tried netstat -ln | grep mysql and it returns unix 2 [ ACC ] STREAM LISTENING 5817 /var/run/mysqld/mysqld.sock. The content of /etc/postfix/mysql-virtual-alias-maps.cf file is here: user = stevejobs password = apple hosts = localhost dbname = mailserver query = SELECT destination FROM virtual_aliases WHERE source='%s' Here I tried to change hosts = 127.0.0.1 but it says warning: connect to mysql server 127.0.0.1: Can't connect to MySQL server on '127.0.0.1' (110) So, I am lost and don't know where else to change in order to solve the problem. Any help would be appreciated highly. Thank you.

    Read the article

  • MVC Automatic Menu

    - by Nuri Halperin
    An ex-colleague of mine used to call his SQL script generator "Super-Scriptmatic 2000". It impressed our then boss little, but was fun to say and use. We called every batch job and script "something 2000" from that day on. I'm tempted to call this one Menu-Matic 2000, except it's waaaay past 2000. Oh well. The problem: I'm developing a bunch of stuff in MVC. There's no PM to generate mounds of requirements and there's no Ux Architect to create wireframe. During development, things change. Specifically, actions get renamed, moved from controller x to y etc. Well, as the site grows, it becomes a major pain to keep a static menu up to date, because the links change. The HtmlHelper doesn't live up to it's name and provides little help. How do I keep this growing list of pesky little forgotten actions reigned in? The general plan is: Decorate every action you want as a menu item with a custom attribute Reflect out all menu items into a structure at load time Render the menu using as CSS  friendly <ul><li> HTML. The MvcMenuItemAttribute decorates an action, designating it to be included as a menu item: [AttributeUsage(AttributeTargets.Method, AllowMultiple = true)] public class MvcMenuItemAttribute : Attribute {   public string MenuText { get; set; }   public int Order { get; set; }   public string ParentLink { get; set; }   internal string Controller { get; set; }   internal string Action { get; set; }     #region ctor   public MvcMenuItemAttribute(string menuText) : this(menuText, 0) { } public MvcMenuItemAttribute(string menuText, int order) { MenuText = menuText; Order = order; }       internal string Link { get { return string.Format("/{0}/{1}", Controller, this.Action); } }   internal MvcMenuItemAttribute ParentItem { get; set; } #endregion } The MenuText allows overriding the text displayed on the menu. The Order allows the items to be ordered. The ParentLink allows you to make this item a child of another menu item. An example action could then be decorated thusly: [MvcMenuItem("Tracks", Order = 20, ParentLink = "/Session/Index")] . All pretty straightforward methinks. The challenge with menu hierarchy becomes fairly apparent when you try to render a menu and highlight the "current" item or render a breadcrumb control. Both encounter an  ambiguity if you allow a data source to have more than one menu item with the same URL link. The issue is that there is no great way to tell which link a person click. Using referring URL will fail if a user bookmarked the page. Using some extra query string to disambiguate duplicate URLs essentially changes the links, and also ads a chance of collision with other query parameters. Besides, that smells. The stock ASP.Net sitemap provider simply disallows duplicate URLS. I decided not to, and simply pick the first one encountered as the "current". Although it doesn't solve the issue completely – one might say they wanted the second of the 2 links to be "current"- it allows one to include a link twice (home->deals and products->deals etc), and the logic of deciding "current" is easy enough to explain to the customer. 
Now that we got that out of the way, let's build the menu data structure: public static List<MvcMenuItemAttribute> ListMenuItems(Assembly assembly) { var result = new List<MvcMenuItemAttribute>(); foreach (var type in assembly.GetTypes()) { if (!type.IsSubclassOf(typeof(Controller))) { continue; } foreach (var method in type.GetMethods()) { var items = method.GetCustomAttributes(typeof(MvcMenuItemAttribute), false) as MvcMenuItemAttribute[]; if (items == null) { continue; } foreach (var item in items) { if (String.IsNullOrEmpty(item.Controller)) { item.Controller = type.Name.Substring(0, type.Name.Length - "Controller".Length); } if (String.IsNullOrEmpty(item.Action)) { item.Action = method.Name; } result.Add(item); } } } return result.OrderBy(i => i.Order).ToList(); } Using reflection, the ListMenuItems method takes an assembly (you will hand it your MVC web assembly) and generates a list of menu items. It digs up all the types, and for each one that is an MVC Controller, digs up the methods. Methods decorated with the MvcMenuItemAttribute get plucked and added to the output list. Again, pretty simple. To make the structure hierarchical, a LINQ expression matches up all the items to their parent: public static void RegisterMenuItems(List<MvcMenuItemAttribute> items) { _MenuItems = items; _MenuItems.ForEach(i => i.ParentItem = items.FirstOrDefault(p => String.Equals(p.Link, i.ParentLink, StringComparison.InvariantCultureIgnoreCase))); } The _MenuItems is simply an internal list to keep things around for later rendering. Finally, to package the menu building for easy consumption: public static void RegisterMenuItems(Type mvcApplicationType) { RegisterMenuItems(ListMenuItems(Assembly.GetAssembly(mvcApplicationType))); } To bring this puppy home, a call in Global.asax.cs Application_Start() registers the menu. Notice the ugliness of reflection is tucked away from the innocent developer. All they have to do is call the RegisterMenuItems() and pass in the type of the application. When you use the new project template, global.asax declares a class public class MvcApplication : HttpApplication and that is why the Register call passes in that type. protected void Application_Start() { AreaRegistration.RegisterAllAreas(); RegisterRoutes(RouteTable.Routes);   MvcMenu.RegisterMenuItems(typeof(MvcApplication)); }   What else is left to do? Oh, right, render! 
public static void ShowMenu(this TextWriter output) { var writer = new HtmlTextWriter(output);   renderHierarchy(writer, _MenuItems, null); }   public static void ShowBreadCrumb(this TextWriter output, Uri currentUri) { var writer = new HtmlTextWriter(output); string currentLink = "/" + currentUri.GetComponents(UriComponents.Path, UriFormat.Unescaped);   var menuItem = _MenuItems.FirstOrDefault(m => m.Link.Equals(currentLink, StringComparison.CurrentCultureIgnoreCase)); if (menuItem != null) { renderBreadCrumb(writer, _MenuItems, menuItem); } }   private static void renderBreadCrumb(HtmlTextWriter writer, List<MvcMenuItemAttribute> menuItems, MvcMenuItemAttribute current) { if (current == null) { return; } var parent = current.ParentItem; renderBreadCrumb(writer, menuItems, parent); writer.Write(current.MenuText); writer.Write(" / ");   }     static void renderHierarchy(HtmlTextWriter writer, List<MvcMenuItemAttribute> hierarchy, MvcMenuItemAttribute root) { if (!hierarchy.Any(i => i.ParentItem == root)) return;   writer.RenderBeginTag(HtmlTextWriterTag.Ul); foreach (var current in hierarchy.Where(element => element.ParentItem == root).OrderBy(i => i.Order)) { if (ItemFilter == null || ItemFilter(current)) {   writer.RenderBeginTag(HtmlTextWriterTag.Li); writer.AddAttribute(HtmlTextWriterAttribute.Href, current.Link); writer.AddAttribute(HtmlTextWriterAttribute.Alt, current.MenuText); writer.RenderBeginTag(HtmlTextWriterTag.A); writer.WriteEncodedText(current.MenuText); writer.RenderEndTag(); // link renderHierarchy(writer, hierarchy, current); writer.RenderEndTag(); // li } } writer.RenderEndTag(); // ul } The ShowMenu method renders the menu out to the provided TextWriter. In previous posts I've discussed my partiality to using well debugged, time test HtmlTextWriter to render HTML rather than writing out angled brackets by hand. In addition, writing out using the actual writer on the actual stream rather than generating string and byte intermediaries (yes, StringBuilder being no exception) disturbs me. To carry out the rendering of an hierarchical menu, the recursive renderHierarchy() is used. You may notice that an ItemFilter is called before rendering each item. I figured that at some point one might want to exclude certain items from the menu based on security role or context or something. That delegate is the hook for such future feature. To carry out rendering of a breadcrumb recursion is used again, this time simply to unwind the parent hierarchy from the leaf node, then rendering on the return from the recursion rather than as we go along deeper. I guess I was stuck in LISP that day.. recursion is fun though.   Now all that is left is some usage! Open your Site.Master or wherever you'd like to place a menu or breadcrumb, and plant one of these calls: <% MvcMenu.ShowBreadCrumb(this.Writer, Request.Url); %> to show a breadcrumb trail (notice lack of "=" after <% and the semicolon). <% MvcMenu.ShowMenu(Writer); %> to show the menu.   As mentioned before, the HTML output is nested <UL> <LI> tags, which should make it easy to style using abundant CSS to produce anything from static horizontal or vertical to dynamic drop-downs.   This has been quite a fun little implementation and I was pleased that the code size remained low. The main crux was figuring out how to pass parent information from the attribute to the hierarchy builder because attributes have restricted parameter types. Once I settled on that implementation, the rest falls into place quite easily.

    Read the article

  • So Singletons are bad, then what?

    - by Bobby Tables
    There has been a lot of discussion lately about the problems with using (and overusing) Singletons. I've been one of those people earlier in my career too. I can see what the problem is now, and yet, there are still many cases where I can't see a nice alternative - and not many of the anti-Singleton discussions really provide one. Here is a real example from a major recent project I was involved in: The application was a thick client with many separate screens and components which uses huge amounts of data from a server state which isn't updated too often. This data was basically cached in a Singleton "manager" object - the dreaded "global state". The idea was to have this one place in the app which keeps the data stored and synced, and then any new screens that are opened can just query most of what they need from there, without making repetitive requests for various supporting data from the server. Constantly requesting to the server would take too much bandwidth - and I'm talking thousands of dollars extra Internet bills per week, so that was unacceptable. Is there any other approach that could be appropriate here than basically having this kind of global data manager cache object? This object doesn't officially have to be a "Singleton" of course, but it does conceptually make sense to be one. What is a nice clean alternative here?

    Read the article

  • MySQL, replacing &nbsp; with... nothing (delete those, please!)

    - by javipas
    I'm trying to purge my WordPress content of "false" carriage returns (CRs). These appeared after a migration of my content, which now contains, from time to time, a &nbsp; code that makes the web rendering engine "paint" a CR where I would like there to be nothing. The paragraphs seem to have a double CR because of this, and look too far apart. I'd like to be able to make a MySQL query in order to get rid of those strings, but at the moment I haven't found the key. What I've tried is UPDATE wp_posts set post_content = replace (post_content,'&nbsp;',' '); But I get <p> </p> where before were the &nbsp; strings. This doesn't seem to be the answer at all. Could it have to do with the ampersand, and in that case, should I use something like &amp;nbsp; or something similar?
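    Since the goal stated in the title is to delete the entity rather than turn it into a space, replacing it with an empty string should do it. A sketch using the table and column from the question (back up wp_posts first; the second statement only matters if the entity ended up double-encoded as &amp;nbsp; in the data):
        UPDATE wp_posts SET post_content = REPLACE(post_content, '&nbsp;', '');
        UPDATE wp_posts SET post_content = REPLACE(post_content, '&amp;nbsp;', '');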

    Read the article

  • Into Orbit (OBIEE 11g Launch)

    - by Darryn.Hinett
    After much anticipation, it appears that OBIEE 11g is about to hit the streets. Join Charles Phillips, President, and Thomas Kurian, Executive Vice President, Product Development, for the launch of the latest release of Oracle's business intelligence software. Be the first to hear about Oracle Business Intelligence Enterprise Edition 11g, the new, industry-leading technology platform for business intelligence, which offers: A powerful end-user experience with rich visualisation, search, and actionable collaboration Advancements in analytics, OLAP, and enterprise reporting, with unmatched performance and scalability Simplified system configuration, life-cycle management, and performance optimisation As well as the keynote and technical general session, break out sessions will cover the following topics: Business Intelligence: From Insight to Action In this session, you will learn about an exciting, industry-first innovation that connects business intelligence directly to your business processes. You can spot an opportunity or issue, and immediately initiate appropriate action directly from your dashboard. Oracle Business Intelligence Enterprise Edition 11g Systems Management and Deployment Learn how you can streamline the process of configuring your system, provisioning users, and monitoring and optimising query performance. Attend this session to hear how new integration with Oracle Enterprise Manager provides unique systems management, superior scalability, and high availability and security benefits, while making upgrades effortless. Extending Business Intelligence Analytics with Online Analytical Processing (OLAP) Learn how you can enhance the analytical power and business value of your BI solution with a unified environment for navigating and querying both OLAP and relational data sources. This session will focus on how Oracle Business Intelligence Enterprise Edition 11g, used with Oracle Essbase, can deliver insight at the speed of thought. Integrated Performance Management If your organisation is using or considering performance management applications such as Oracle's Hyperion Planning and Hyperion Financial Management, you will not want to miss this session. See how you can leverage Oracle's BI solution for accessing performance management applications and performing extended financial reporting and analysis. Visualisation and End-user Experience The latest release of Oracle Business Intelligence provides an unrivaled end user experience, including rich interactive dashboards, a vast range of animated charting options, integrated search, and more. This session will also include a close look at how you can leverage location data to visualise geo-spatial information.

    Read the article

  • lsass.exe memory leak on windows 2003 server

    - by thelsdj
    In the past month or so I noticed that lsass.exe has started to leak memory, getting to 500MB+ of ram in under a week after reboot. Before this I had never noticed it using any significant amount of memory compared to other processes on the system. This is happening on 2 identical servers, neither of which has anything to do with Active Directory. Maybe a recent Windows Update has caused this? Any thoughts on things to check? As a side question is there some way to recycle the memory usage of lsass.exe without rebooting? Edit: Here is what I'm seeing in Process Monitor, there are thousands of registry open/query/close a minute from lsass.exe. How can I track down what is triggering these?

    Read the article

  • Silverlight Cream for May 30, 2010 -- #873

    - by Dave Campbell
    In this Issue: Matthias Shapiro, Colin Blair(-2-), Mike Snow, Marlon Grech, Victor Gaudioso. Shoutout: If you're going to be anywhere near Mission Viejo, California on June 19th, set your calendar for this Victor Gaudioso event: New Speaking Event: Microsoft Book Signing/Silverlight 4 Presentation SilverLaw has another example of his Flexible surface app up: Drag & Drop Flexible Surface - Silverlight 4 From SilverlightCream.com: Silverlight 4 Binding and StringFormat in XAML Matthias Shapiro has a discussion posted about StringFormat binding in Silverlight 4 ... he dug in hard on this... well worth a read. View Model Collection Properties for WCF RIA Services Colin Blair is discussing some possibilities for exposing collections of entities from the ViewModel... his favorite: PagedCollectionView. The next post discusses this deeper. Advanced Paged Collection View Colin Blair continues in more depth on the PagedCollectionView, this time handling paging, sorting, and multiple loads. Silverlight Tip of the day #25 – Detecting Validation Errors on Submit Mike Snow's latest Tip of the Day is up and is about validation - specifically validating after your user has pressed "OK" INotifyPropertyChanged… I am fed up of handling events just to know when a property changed Marlon Grech has an Rx-less solution to code notifications of properties changing... this is a WPF and Silverlight solution and all the code is downloadable. New Silverlight Video Tutorial: How to Add Multiple BitmapEffects to One Object Victor Gaudioso's latest outing is in response to a query from a reader and is a video tutorial showing how to add multiple bitmap effects to one object. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • HUGE EF4 Inheritance Bug

    - by djsolid
    Well, maybe not for everyone, but for me it is definitely really important. That is why I'll get straight to the point. We have the following model: Which maps to the following database: We are using EF4.0 and we want to load all Burgers including BurgerDetails. So we write the following query: But it fails. The error is: “The ResultType of the specified expression is not compatible with the required type. The expression ResultType is 'Transient.reference[SampleEFDBModel.Food]' but the required type is 'Transient.reference[SampleEFDBModel.Burger]'.Parameter name: arguments[0]” So in the new version of EF there is no way to eager load data through Navigation Properties with 1-1 relationships defined in subclasses. Here is the relevant Microsoft Connect Issue. It is described through another example but the result is the same. Please, if you think this is important, vote it up on Microsoft Connect. EF 4.0 has many improvements. I have been using it since v1 in large-scale projects and this version is faster, produces cleaner SQL, is more reliable and can be used for complicated business scenarios. That is why I believe this issue should be solved as soon as possible. I understand that release cycles are slow but I am hoping at least for a hotfix. I also have uploaded the example project so you can test it. Download it from here. If anyone has found any workarounds please post them in the comments section. Thanks!

    Read the article

  • First toe in the water with Object Databases : DB4O

    - by REA_ANDREW
    I have been wanting to have a play with object databases for a while now, and today I have done just that. One of the obvious choices I had to make was which one to use. My criteria for choosing one today were simple: I wanted one which I could literally whack in and start using, which means I wanted one which either had a .NET API or was designed/ported to .NET. My decision was between two: db4o and MongoDB. I went for db4o for the single reason that it looked like I could get it running and integrated the quickest. I am making a blogging application and front end as a project with which I can test and learn these object databases. Another requirement which I thought I would mention is that I also want to be able to use the said database in a shared hosting environment where I cannot install, run and maintain a server instance of said object database. I can do exactly this with db4o; I have not tried to do this with MongoDB at the time of writing. There are quite a few in the industry now, and you can read an interesting post about different ones and how they are used by some of the heavyweights in the industry here: http://blog.marcua.net/post/442594842/notes-from-nosql-live-boston-2010
    In the example which I am building I am using StructureMap as my IoC container. To inject the object for db4o I went with a singleton instance scope, as I am using a single file and I need it to be available to any thread in the process, as opposed to using the server implementation where I could open and close client connections with the server handling each one respectively. Again I want to point out that I have chosen to stick with the non-server implementation of db4o as I wanted to use this in a shared hosting environment where I cannot have such servers installed and run.

        public static class Bootstrapper
        {
            public static void ConfigureStructureMap()
            {
                ObjectFactory.Initialize(x => x.AddRegistry(new MyApplicationRegistry()));
            }
        }

        public class MyApplicationRegistry : Registry
        {
            public const string DB4O_FILENAME = "blog123";

            // The db4o data file lives next to the assembly that defines IBlogRepository.
            public string DbPath
            {
                get
                {
                    return Path.Combine(
                        Path.GetDirectoryName(Assembly.GetAssembly(typeof(IBlogRepository)).Location),
                        DB4O_FILENAME);
                }
            }

            public MyApplicationRegistry()
            {
                // One IObjectContainer for the whole process, opened against the embedded (file) engine.
                For<IObjectContainer>().Singleton().Use(
                    () => Db4oEmbedded.OpenFile(Db4oEmbedded.NewConfiguration(), DbPath));

                Scan(assemblyScanner =>
                {
                    assemblyScanner.TheCallingAssembly();
                    assemblyScanner.WithDefaultConventions();
                });
            }
        }

    So the code above is the StructureMap plumbing which I use for the application. I am doing this simply as a quick scratch pad to play around with different things, so I am simply segregating logical layers with folder structure as opposed to different assemblies. It would be easy to split any segment out later, but for the purposes of the example I have literally just whacked everything into the one assembly. You can see an example file structure I have on the right.
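    As a quick illustrative sketch (assuming the Post class and the IBlogRepository/BlogRepository pair that appear later in this post), consuming the registry above could look something like this, with StructureMap supplying the singleton IObjectContainer to the repository:

        // Sketch only - the names are taken from elsewhere in this post, not from a separate sample.
        Bootstrapper.ConfigureStructureMap();

        // WithDefaultConventions maps IBlogRepository to BlogRepository, and the singleton
        // IObjectContainer registered above is injected into its constructor.
        var repository = ObjectFactory.GetInstance<IBlogRepository>();
        repository.Put(new Post { Title = "Hello db4o", Content = "First post" });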
    I am planning on testing out a few implementations of the object databases out there, so I can program to an IBlogRepository interface. One of the things I was unsure about was how it performed in a multi-threaded environment, which is how it will undoubtedly be used 9 times out of 10; because I am using the db context as a singleton, I assumed that the library was of course thread safe, but I did not know, as I have not read it anywhere in the documentation - again, this is probably me not reading things correctly. In short, I threw together a simple test where I iterate to a limit, each time kicking a common task off with a thread from a thread pool. This task simply creates a random Post and adds it to the storage. The execution of the threads I put inside the Setup of the test, and then I simply ensure the number of posts committed to the database is equal to the number of iterations I made; here is the code I used to do the multi-threaded jobs:

        [TestInitialize]
        public void Setup()
        {
            var sw = new System.Diagnostics.Stopwatch();
            sw.Start();

            var resetEvent = new ManualResetEvent(false);
            ThreadPool.SetMaxThreads(20, 20);

            for (var i = 0; i < MAX_ITERATIONS; i++)
            {
                ThreadPool.QueueUserWorkItem(delegate(object state)
                {
                    var eventToReset = (ManualResetEvent)state;
                    var post = new Post { Author = MockUser, Content = "Mock Content", Title = "Title" };
                    Repository.Put(post);

                    // Signal the main thread once the last worker has stored its Post.
                    var counter = Interlocked.Decrement(ref _threadCounter);
                    if (counter == 0)
                        eventToReset.Set();
                }, resetEvent);
            }

            WaitHandle.WaitAll(new[] { resetEvent });
            sw.Stop();
            Console.WriteLine("{0:00}.{1:00} seconds", sw.Elapsed.Seconds, sw.Elapsed.Milliseconds);
        }

    I was not doing this to test the speed performance of db4o, but while I was doing it I could not help putting in a Stopwatch and seeing, out of sheer interest, how fast it would insert a number of Posts. I tested it in this case with 10,000 inserts of a small, simple POCO and it resulted in an average of 899.36 object inserts / second. Again, this is just a simple, crude test which came out of my curiosity about how it performed under many threads when using the non-server implementation of db4o. The spec summary of the computer I used is as follows:

    With regards to the actual repository implementation itself, it really is quite straightforward, and I have to say I am very surprised at how easy it was to integrate and get up and running. One thing I have noticed in the exposure I have had so far is that Query returns IList<T> as opposed to IQueryable<T>, but again I have not looked into this in depth - this may already be there, and if not, they have provided everything one needs to make their own repository. An example of a couple of methods from my db4o implementation of the BlogRepository is below:

        public class BlogRepository : IBlogRepository
        {
            private readonly IObjectContainer _db;

            public BlogRepository(IObjectContainer db)
            {
                _db = db;
            }

            public void Put(DomainObject obj)
            {
                _db.Store(obj);
            }

            public void Delete(DomainObject obj)
            {
                _db.Delete(obj);
            }

            public Post GetByKey(object key)
            {
                return _db.Query<Post>(post => post.Key == key).FirstOrDefault();
            }
            …

    Anyway, I hope to get a few more implementations of the object databases going and get familiar with them and the concept of NoSQL databases. Cheers for now, Andrew

    Read the article

  • Methodology behind fetching large XML data sets in pieces

    - by Jerry Dodge
    I am working on an HTTP server in Delphi which simply sends back a custom XML dataset. I am not following any type of standard formatting, such as SOAP. I have the system working seamlessly, except for one small flaw: when I have a very large dataset to send back to the client, it might take up to 2 minutes for all the data to be transferred. The HTTP server I'm building is essentially an XML-data-based API around a database, implementing the common business rules - therefore, the requests are specific to the data behind the system. When, for example, I fetch a large set of product data, I would like to break this down and send it back piece by piece. However, a single HTTP request calls for a single response. I can't necessarily keep feeding the client multiple different XML packets unless the client explicitly requests them. I don't have any session management, but rather an API key. I know that if I had sessions, I could keep a dataset alive temporarily for a client, and they could request bits and pieces of it. However, without session management, I would have to execute the SQL query multiple times (once for each chunk of data), and in the meantime, if that data changes, the "pages" might get messed up, causing items to show on the wrong pages after navigating to a different page. So how is this commonly handled? What's the methodology behind breaking down a large XML dataset into chunks to reduce the load?
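    Purely as an illustration of the problem described above (hypothetical names, and a C# sketch rather than the Delphi the server is written in): re-running the query for each chunk means an insert or delete between requests can shift rows across page boundaries.

        // Illustrative sketch only: each HTTP request re-executes the query for one page.
        // Assumes a database that supports OFFSET ... FETCH paging syntax.
        static string BuildProductPageSql(int page, int pageSize)
        {
            int offset = (page - 1) * pageSize;   // page 1 => rows 0 .. pageSize-1
            return "SELECT * FROM Product ORDER BY ProductID " +
                   "OFFSET " + offset + " ROWS FETCH NEXT " + pageSize + " ROWS ONLY";
        }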

    Read the article

  • Oracle LogMiner (Unsupported Operation)

    - by Sarith
    Hi all, I'm currently using Oracle 11g on RHEL 5. I'm digging in to see what is in the archived log files. When I query v$logmnr_contents, I see many transactions with an UNSUPPORTED operation. What do these unsupported transactions mean? I think they are the cause of my database generating lots of archived logs. Moreover, I'm using global temporary tables for generating reports. I have discovered that when I insert into and delete from those temporary tables, the operations are also recorded in the archived log files. What can I do to reduce those recorded transactions? Regards, Sarith

    Read the article
