Search Results

Search found 17583 results on 704 pages for 'query analyzer'.

Page 298/704 | < Previous Page | 294 295 296 297 298 299 300 301 302 303 304 305  | Next Page >

  • SQL Saturday 47 Phoenix February 2011

    - by billramo
    Today I presented data collection strategies for SQL Server 2008 at Phoenix SQL Saturday 47. I’ve attached my deck to this post so that you can get the links and references that I presented. To learn more about the Data Collector, check out these links. SQL Server 2008 Data Collector Proof of Concept – How to get started with Data Collector in your organization SQL Server Query Hash Statistics – Replacement for the shipping Query Statistics collection set Writing Reports Against the Management Data Warehouse – This is part 1 of the series on MDW reports. You can see all the articles through this link. Be sure to click on the attachment link to download the slides.

    Read the article

  • Eager Loading more than 1 table in LinqtoSql

    - by Michael Freidgeim
    When I tried to load a table with two child tables in Linq2Sql, I noticed that multiple SQL statements were generated. I found that it is a known issue: if you try to specify more than one relationship to pre-load, LINQ to SQL just picks one to pre-load and leaves the others deferred (simply ignoring those LoadWith hints). There is more explanation at http://codebetter.com/blogs/david.hayden/archive/2007/08/06/linq-to-sql-query-tuning-appears-to-break-down-in-more-advanced-scenarios.aspx: "The reason the relationship in your blog post above is generating multiple queries is that you have two (1:n) relationships (Customers->Orders) and (Orders->OrderDetails). If you just had one (1:n) relationship (Customer->Orders) or (Orders->OrderDetails), LINQ to SQL would optimize and grab it in one query (using a JOIN)." An alternative is to use SQL and POCO classes; see http://stackoverflow.com/questions/238504/linq-to-sql-loading-child-entities-without-using-dataloadoptions?rq=1 Fortunately the problem is not applicable to Entity Framework, which we want to use in future development instead of Linq2Sql: Product firstProduct = db.Product.Include("OrderDetail").Include("Supplier").First();
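
    For reference, a minimal sketch of the LoadWith pattern discussed above; the DataContext and entity names (NorthwindDataContext, Customer, Order, OrderDetails) are hypothetical, not taken from the post:

        using System;
        using System.Data.Linq;
        using System.Linq;

        class Program
        {
            static void Main()
            {
                // Hypothetical DataContext and entities, for illustration only.
                using (var db = new NorthwindDataContext())
                {
                    var options = new DataLoadOptions();
                    options.LoadWith<Customer>(c => c.Orders);      // first (1:n) hint
                    options.LoadWith<Order>(o => o.OrderDetails);   // second (1:n) hint - the one that tends to be ignored
                    db.LoadOptions = options;                       // must be set before any query runs

                    // Profile the generated SQL here: with two chained (1:n) hints,
                    // LINQ to SQL typically eager-loads only one level and defers the other.
                    var customers = db.Customers.ToList();
                    Console.WriteLine(customers.Count);
                }
            }
        }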

    Read the article

  • Redehost Transforms Cloud & Hosting Services with MySQL Enterprise Edition

    - by Mat Keep
    RedeHost is one of Brazil's largest cloud computing and web hosting providers, with more than 60,000 customers and 52,000 web sites running on its infrastructure. As the company grew, RedeHost needed to automate operations such as system monitoring, making the operations team more proactive in solving problems. RedeHost also sought to improve server uptime, robustness, and availability, especially during backup windows, when performance would often dip. To address the needs of the business, RedeHost migrated from the community edition of MySQL to MySQL Enterprise Edition, which has delivered a host of benefits: - Proactive database management and monitoring using MySQL Enterprise Monitor, enabling RedeHost to fulfil customer SLAs; using the Query Analyzer, RedeHost was able to identify slow queries more rapidly, improving customer support - Quadrupled backup speed with MySQL Enterprise Backup, leading to faster data recovery and improved system availability - Reduced DBA overhead by 50% due to the improved support capabilities offered by MySQL Enterprise Edition - Enabled infrastructure consolidation, avoiding unnecessary energy costs and premature hardware acquisition. You can learn more from the full RedeHost case study. Also, take a look at the recently updated MySQL in the Cloud whitepaper for the latest developments that are making it even simpler and more efficient to develop and deploy new services with MySQL in the cloud.

    Read the article

  • How should I design my database API commands? [closed]

    - by WebDev
    I am developing a database API for a project, with commands for getting data from the database. For example, I have one gib table, so the command for that is: getgib name alias limit fields. If the user passes a name, such as getgib rahul, then it will return all gib data whose name is like rahul. If an alias is given, it will return all the gibs owned by the user whose alias (user id) was given. I want the commands to work like this: limit limits the number of records returned by the query, and fields lists extra fields I want to add to the select query. So now the commands are set, but I also want to fetch gibs by gibid, so how should I add that? Any suggestion to improve my commands is welcome. Also, if the user doesn't want to specify a name and only wants the gibs for a given alias, what separator should I use in place of the name?

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005, there was much hype about how they can help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are in use or unused, complete or missing some columns is irrelevant; this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this dmv can be executed as:   The results from executing this contain a record for every index in every database but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLS, or more accurately, in order to choose when to have the NULLS, you need to specify a value for the last parameter. It takes one of 4 values – DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step. DECLARE @Start DATETIME DECLARE @First DATETIME DECLARE @Second DATETIME DECLARE @Third DATETIME DECLARE @Finish DATETIME SET @Start = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips SET @First = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips SET @Second = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips SET @Third = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips SET @Finish = GETDATE() SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT] , DATEDIFF(ms, @First, @Second) AS [SAMPLED] , DATEDIFF(ms, @Second, @Third) AS [LIMITED] , DATEDIFF(ms, @Third, @Finish) AS [DETAILED] Running this code will give you 4 result sets: DEFAULT will have 12 columns full of data and then NULLS in the remainder. SAMPLED will have 21 columns full of data. LIMITED will have 12 columns of data and then NULLS in the remainder. DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first. Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id.
    These are pretty self-explanatory and we can wrap those in some code to make things a little easier to read: SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName] … FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips gives us SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName], [i].[name] AS [IndexName] , ….. FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id] AND [ddips].[object_id] = [i].[object_id] These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g. SELECT * FROM [sys].[dm_db_index_physical_stats] (DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address') , 1, NULL, NULL) AS ddips Note: Despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function: DECLARE @db_id SMALLINT; DECLARE @object_id INT; SET @db_id = DB_ID(N'AdventureWorks_2008'); SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address'); IF @db_id IS NULL BEGIN PRINT N'Invalid database'; END ELSE IF @object_id IS NULL BEGIN PRINT N'Invalid object'; END ELSE BEGIN SELECT * FROM sys.dm_db_index_physical_stats (@db_id, @object_id, NULL, NULL , 'LIMITED'); END; GO In cases where the results of querying this dmv don't have any effect on other processes (i.e. when simply viewing the results in the SSMS results area), it will simply be noticed if the results are not consistent with what was expected, and in the case of this blog that is the method I have used. So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the dmv are all about. The next columns are: we'll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level as this is a quick look at the dmv and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are avg_fragmentation_in_percent – the amount that the index is logically fragmented (it will show NULL when the dmv is queried in SAMPLED mode); fragment_count – the number of pieces that the index is broken into (it will show NULL when the dmv is queried in SAMPLED mode); avg_fragment_size_in_pages – the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit (it will show NULL when the dmv is queried in SAMPLED mode); and page_count – the total number of index or data pages in use. OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
    This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having its data in 3 pieces might not be too big an issue, as each piece would have a significant piece of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases, and that means the amount of work for the disks to do in order to retrieve the data to satisfy the query increases, and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account the multiple files, type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to another. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books OnLine content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let's get back on track then. Querying the dmv with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (Page_Count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1*8) * 1024*(Col2/100))/Col3 = Col4*. avg_page_space_used_in_percent is an important column to review as this indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: a detail of the smallest and largest records in the index.
    Purely offered as a guide to the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider: the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly; and the columns used in the index, which should be analysed so that new records do not need to be inserted into the middle of the index but can always be added to the end. * – it's approximate as there are many factors associated with things like the type of data and other database settings that affect this slightly. Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
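
    As a rough illustration of the kind of 'intelligent' defragmentation process mentioned above, here is a simple sketch that applies the commonly quoted thresholds (reorganize between 5% and 30% logical fragmentation, rebuild above 30%). The thresholds and filters are assumptions, not taken from this post, so treat it as a starting point and test it on a non-production server first:

        -- Sketch only: generate suggested maintenance statements from the DMV output.
        SELECT  OBJECT_NAME(ddips.[object_id]) AS TableName ,
                i.[name] AS IndexName ,
                ddips.avg_fragmentation_in_percent ,
                CASE
                    WHEN ddips.avg_fragmentation_in_percent > 30
                        THEN 'ALTER INDEX [' + i.[name] + '] ON [' + OBJECT_NAME(ddips.[object_id]) + '] REBUILD;'
                    WHEN ddips.avg_fragmentation_in_percent > 5
                        THEN 'ALTER INDEX [' + i.[name] + '] ON [' + OBJECT_NAME(ddips.[object_id]) + '] REORGANIZE;'
                    ELSE '-- leave alone'
                END AS SuggestedAction
        FROM    [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
                INNER JOIN [sys].[indexes] AS i
                    ON  ddips.[object_id] = i.[object_id]
                    AND ddips.index_id = i.index_id
        WHERE   ddips.index_id > 0          -- skip heaps
          AND   ddips.page_count > 100;     -- ignore tiny indexes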

    Read the article

  • CodePlex Daily Summary for Wednesday, July 04, 2012

    CodePlex Daily Summary for Wednesday, July 04, 2012Popular ReleasesMVC Controls Toolkit: Mvc Controls Toolkit 2.2.0: Added Modified all Mv4 related features to conform with the Mvc4 RC Now all items controls accept any IEnumerable<T>(before just List<T> were accepted by most of controls) retrievalManager class that retrieves automatically data from a data source whenever it catchs events triggered by filtering, sorting, and paging controls move method to the updatesManager to move one child objects from a father to another. The move operation can be undone like the insert, update and delete operatio...BlackJumboDog: Ver5.6.6: 2012.07.03 Ver5.6.6 (1) ???????????ftp://?????????、????LIST?????Mini SQL Query: Mini SQL Query (v1.0.68.441): Just a bug fix release for when the connections try to refresh after an edit. Make sure you read the Quickstart for an introduction.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.58: Fix for Issue #18296: provide "ALL" value to the -ignore switch to ignore all error and warning messages. Fix for issue #18293: if encountering EOF before a function declaration or expression is properly closed, throw an appropriate error and don't crash. Adjust the variable-renaming algorithm so it's very specific when renaming variables with the same number of references so a single source file ends up with the same minified names on different platforms. add the ability to specify kno...LogExpert: 1.4 build 4566: This release for the 1.4 version line contains various fixes which have been made some times ago. Until now these fixes were only available in the 1.5 alpha versions. It also contains a fix for: 710. Column finder (press F8 to show) Terminal server issues: Multiple sessions with same user should work now Settings Export/Import available via Settings Dialog still incomple (e.g. tab colors are not saved) maybe I change the file format one day no command line support yet (for importin...DynamicToSql: DynamicToSql 1.0.0 (beta): 1.0.0 beta versionCommonLibrary.NET: CommonLibrary.NET 0.9.8.5 - Final Release: A collection of very reusable code and components in C# 4.0 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. FluentscriptCommonLibrary.NET 0.9.8 contains a scripting language called FluentScript. Releases notes for FluentScript located at http://fluentscript.codeplex.com/wikipage?action=Edit&title=Release%20Notes&referringTitle=Documentation Fluentscript - 0.9.8.5 - Final ReleaseApplication: FluentScript Versio...SharePoint 2010 Metro UI: SharePoint 2010 Metro UI8: Please review the documentation link for how to install. Installation takes some basic knowledge of how to upload and edit SharePoint Artifact files. Please view the discussions tab for ongoing FAQsnopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.60: Highlight features & improvements: • Significant performance optimization. • Use AJAX for adding products to the cart. • New flyout mini-shopping cart. • Auto complete suggestions for product searching. • Full-Text support. • EU cookie law support. 
To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).THE NVL Maker: The NVL Maker Ver 3.51: http://download.codeplex.com/Download?ProjectName=nvlmaker&DownloadId=371510 ????:http://115.com/file/beoef05k#THE-NVL-Maker-ver3.51-sim.7z ????:http://www.mediafire.com/file/6tqdwj9jr6eb9qj/THENVLMakerver3.51tra.7z ======================================== ???? ======================================== 3.51 beta ???: ·?????????????????????? ·?????????,?????????0,?????????????????????? ·??????????????????????????? ·?????????????TJS????(EXP??) ·??4:3???,???????????????,??????????? ·?????????...????: ????2.0.3: 1、???????????。 2、????????。 3、????????????。 4、bug??,????。AssaultCube Reloaded: 2.5 Intrepid: Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, download the Linux package. Try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included) You should delete /home/config/saved.cfg to reset binds/other stuff If you us...Magelia WebStore Open-source Ecommerce software: Magelia WebStore 2.0: User Right Licensing ContentType version 2.0.267.1Bongiozzo Photosite: Alpha: Just first stable releaseMDS MODELING WORKBOOK: MDS MODELING WORKBOOK: This is the initial release. Works with SQL 2008 R2 Master Data Services. Also works with SQL 2012 Master Data Services but has not been completely tested.Logon Screen Launcher: Logon Screen Launcher 1.3.0: FIXED - Minor handle leak issueBF3Rcon.NET: BF3Rcon.NET 25.0: This update brings the library up to server release R25, which includes the few additions from R21. There are also some minor bug fixes and a couple of other minor changes. In addition, many methods now take advantage of the RconResult class, which will give error information on failed requests; this replaces the bool returned by many methods. There is also an implicit conversion from RconResult to bool (both of which were true on success), so old code shouldn't break. ChangesAdded Player.S...TelerikMvcGridCustomBindingHelper: Version 1.0.15.183-RC: TelerikMvcGridCustomBindingHelper 1.0.15.183 RC This is a RC (release candidate) version, please test and report any error or problem you encounter. Warning: There are many changes in this release and some of them break backward compatibility. 
Release notes (since 0.5.0-Alpha version): Custom aggregates via an inherited class or inline fluent function Ignore group on aggregates for better performance Projections (restriction of the database columns queried) for an even better performa...PunkBuster™ Screenshot Viewer: PunkBuster™ Screenshot Viewer 1.0: First release of PunkBuster™ Screenshot ViewerDesigning Windows 8 Applications with C# and XAML: Chapters 1 - 7 Release Preview: Source code for all examples from Chapters 1 - 7 for the Release PreviewNew ProjectsAzureMVC4: hiBoonCraft Launcher: BoonCraft Launcher V2.0 See http://352n.dyndns.org for more info on BoonCraftC# to Javascript: Have you ever wanted to automagically have access to the enums you use in your .NET code in the javascript code you're writing for client-side?CMCIC payment gateway provider for NB_Store: CMCIC payment gateway provider for NB_StoreCOFE2 : Cloud Over IFileSystemInfo Entries Extensions: COFE2 enable user to access the user-defined file system on local or foreign computer, using a System.IO-like interface or a RESTful Web API.Directory access via LDAP: .NET library for managing a directory via LDAP.E-mail processing: .NET library for processing e-mail.FAST Search for SharePoint Query Statistics: F4SP Query Statistics scans the FAST for SharePoint Query Logs and presents statistics based on the logs. Total Queries, Top Queries, Queries per user etc...File Backup: This project is an open source windows azure cloud backup win forms application.HanxiaoFu's personal: This will help synchronizing my work done in home and at workLifekostyuk: This is my first project on TFSNet WebSocket Server: NetWebSocket Server is c# based hight performance and scalable Websocket server. Posroid for Windows 8: ?? ??????? ????? ?? ?? ????? ??? ?? ??? ??? ???? ? ? ???, ???8??? ???? ?? ??? ? ? ??? ??? ????? ?????.PowerRules: PowerRules is a group of scripts that help you audit your farm for Configuration Drift (Configuration changes over time)Projet Niloc TETRAS: Student Project to know how to manage and coordinating a team.proyectobanco: PROFE AQUI ESTA EL PROYECTO DISCULPE NOMAS ATT SANCANsheetengine - Isometric HTML5 JavaScript Display Engine: Sheetengine is an HTML5 canvas based isometric display engine for JavaScript. It features textures, z-ordering, shadows, intersecting sheets, object movements.Shiny2: GTS Spoofing program for Generation IV and V of Pokemon.SMS Backup & Restore XML to MySQL: The purpose is to take the XML files created by SMS Backup and Restore (Android) and importing them via a Dropbox/Google Drive synch into a MySQL dbStundenplan TSST: App für Windows Phone um die einzelnen Vertretungspläne der technischen Schule Steinfurt anzusehenswalmacenamiento: Proyecto para el almacenamiento de registrosTFS Work Item Association Check-in Policy: This policy requires TFS source control check-ins to be associated with a single, in-progress task that is assigned to you.TurboTemplate: TurboTemplate is a fast source code generation helper which quick transforms between your SQL database and some templated text of your choice.visblog: this is short summary of my projectVisualHG_fliedonion: This is fork of VisualHG. This will used by improve VisualHG for me. support only Visual Studio 2008 (not SP1). Wave Tag Library: A very simple and modest .wav file tag library. With this library you can load .wav files, edit the tags (equivalent to mp3's ID3 tags) and save back to file.Wordpress: WordPress is web software you can use to create a beautiful website or blog. 
We like to say that WordPress is both free and priceless at the same time.ZEAL-C02 Bluetooth module Driver for Netduino: A class library for the .NET Micro Framework to support the Zeal-C02 Bluetooth module for Netduino.

    Read the article

  • It's intellisense for SQL Server

    - by Nick Harrison
    It's intellisense for SQL Server. Anyone who has ever worked with me, heard me speak, or read any of my writings knows that I am a HUGE fan of Reflector. By extension, I am a big fan of Red Gate. I have recently begun exploring some of their other offerings and came across this jewel. SQL Prompt is a plug-in for Visual Studio and SQL Server Management Studio. It provides several tools to make dealing with SQL a little easier for your friendly neighborhood developer. When you open a query window in a database, the plugin kicks in and gathers the metadata for the database that you are in. As you type a query, you get handy feedback like a list of tables after you type select. You can select one of the tables, specify * and then tab to expand the select clause to include all of the columns from the selected table. As you are building up the where clause, you are prompted by the names of columns in the selected tables. If you spend any time writing ad hoc queries or building stored procedures by hand, this can save you substantial time. If you are learning a new data model, this can greatly cut down on your frustration level. The other really cool thing here is Format SQL. I have searched all over the place for a really good SQL formatter. Badly formatted SQL is so much harder to read than well formatted SQL. Unfortunately, Management Studio offers no support for keeping your SQL well formatted. There are many tools available to format your SQL. Some work better than others. Some don't work that well at all. Most will give you some measure of control over how the formatted SQL looks. SQL Prompt produces good results and is easy to configure. Sadly no tool is perfect, and what would we be without a wish list? There are some features that I would like to see: make it easier to paste SQL in and out of code (strip off string builder, etc.); automate replacing hard-coded values with bind variables or parameters; and, in addition to reformatting SQL, which is a huge refactor, support for other SQL refactors would be nice (convert join to subquery and vice versa come to mind). Wish list aside, this is a wonderful tool that easily saves me an hour or more most weeks.

    Read the article

  • WebCenter Content (WCC) Trace Sections

    - by Kevin Smith
    Kyle has a good post on how to modify the size and number of WebCenter Content (WCC) trace files. His post reminded me I have been meaning to write a post on WCC trace sections for a while. searchcache - Tells you if your query was found in the WCC search cache. searchquery - Shows the processing of the query as it is converted from what the user submitted to the end query that will be sent to the database. Shows conversion from the universal query syntax to the syntax specific to the search solution WCC is configured to use. services (verbose) - Lists the filters that are called for each service. This will let you know what filters are available for each service and will also tell you what filters are used by WCC add-on components and any custom components you have installed. The How To Component Sample has a list of filters, but it has not been updated since 7.5, so it is a little outdated now. With each new release WCC adds more filters. If you have a filter that has no code attached to it you will see output like this: services/6    09.25 06:40:26.270    IdcServer-423    Called filter event computeDocName with no filter plugins registered When a WCC add-on or custom component uses a filter you will see trace output like this: services/6    09.25 06:40:26.275    IdcServer-423    Calling filter event postValidateCheckinData on class collections.CollectionValidateCheckinData with parameter postValidateCheckinData services/6    09.25 06:40:26.275    IdcServer-423    Calling filter event postValidateCheckinData on class collections.CollectionFilters with parameter postValidateCheckinData As you can see from this sample output it is possible to have multiple code points using the same filter. systemdatabase - Dumps the database call AFTER it executes. This can be somewhat troublesome if you are trying to track down some weird database problems. We had a problem where WCC was getting into a deadlock situation. We turned on the systemdatabase trace section and thought we had the problem database call, but it turned out that since it printed out the database call after it was executed, we were looking at the database call BEFORE the one causing the deadlock. We ended up having to turn on tracing at the database level to see the database call WCC was making that was causing the deadlock. socketrequests (verbose) - Dumps the actual messages received and sent over the socket connection by WCC for a service. If you have gzip enabled you will see junk on the response coming back from WCC. For debugging, disable the gzip of the WCC response. Here is an example of the dump of the request for a GET_SEARCH_RESULTS service call.
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: REMOTE_USER=sysadmin.USER-AGENT=Java;.Stel socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: lent.CIS.11g.CONTENT_TYPE=text/html.HEADER socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: _ENCODING=UTF-8.REQUEST_METHOD=POST.CONTEN socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: T_LENGTH=270.HTTP_HOST=CIS.$$$$.NoHttpHead socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: ers=0.IsJava=1.IdcService=GET_SEARCH_RESUL socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: [email protected] socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: calData.SortField=dDocName.ClientEncoding= socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: UTF-8.IdcService=GET_SEARCH_RESULTS.UserTi socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: meZone=UTC.UserDateFormat=iso8601.SortDesc socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: =ASC.QueryText=dDocType..matches..`Documen socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: t`.@end. userstorage, jps - Provides trace details for user authentication and authorization. Includes information on the determination of what roles and accounts a user has access to. In 11g a new trace section, jps, was added with the addition of the JpsUserProvider to communicate with WebLogic Server. The WCC developers decide when to use the verbose option for their trace output, so sometimes you need to try verbose to see what different information you get. One of the things I would always have liked to see is the ability to turn on verbose output selectively for individual trace sections. When you turn on verbose output you get it for all trace sections you have enabled. This can quickly fill up your trace files with a lot of information if you have the socket trace section turned on.

    Read the article

  • Integrating Windows Form Click Once Application into SharePoint 2007 &ndash; Part 2 of 4

    - by Kelly Jones
    In my last post, I explained why we decided to use a Click Once application to solve our business problem. To quickly review, we needed a way for our business users to upload documents to a SharePoint 2007 document library in mass, set the meta data, set the permissions per document, and to do so easily. Let’s look at the pieces that make up our solution.  First, we have the Windows Form application.  This app is deployed using Click Once and calls SharePoint web services in order to upload files and then calls web services to set the meta data (SharePoint columns and permissions).  Second, we have a custom action.  The custom action is responsible for providing our users a link that will launch the Windows app, as well as passing values to it via the query string.  And lastly, we have the web services that the Windows Form application calls.  For our solution, we used both out of the box web services and a custom web service in order to set the column values in the document library as well as the permissions on the documents. Now, let’s look at the technical details of each of these pieces.  (All of the code is downloadable from here: )   Windows Form application deployed via Click Once The Windows Form application, called “Custom Upload”, has just a few classes in it: Custom Upload -- the form FileList.xsd -- the dataset used to track the names of the files and their meta data values SharePointUpload -- this class handles uploading the file SharePointUpload uses an HttpWebRequest to transfer the file to the web server. We had to change this code from a WebClient object to the HttpWebRequest object, because we needed to be able to set the time out value.  public bool UploadDocument(string localFilename, string remoteFilename) { bool result = true; //Need to use an HttpWebRequest object instead of a WebClient object // so we can set the timeout (WebClient doesn't allow you to set the timeout!) HttpWebRequest req = (HttpWebRequest)WebRequest.Create(remoteFilename); try { req.Method = "PUT"; req.Timeout = 60 * 1000; //convert seconds to milliseconds req.AllowWriteStreamBuffering = true; req.Credentials = System.Net.CredentialCache.DefaultCredentials; req.SendChunked = false; req.KeepAlive = true; Stream reqStream = req.GetRequestStream(); FileStream rdr = new FileStream(localFilename, FileMode.Open, FileAccess.Read); byte[] inData = new byte[4096]; int bytesRead = rdr.Read(inData, 0, inData.Length); while (bytesRead > 0) { reqStream.Write(inData, 0, bytesRead); bytesRead = rdr.Read(inData, 0, inData.Length); } reqStream.Close(); rdr.Close(); System.Net.HttpWebResponse response = (HttpWebResponse)req.GetResponse(); if (response.StatusCode != HttpStatusCode.OK && response.StatusCode != HttpStatusCode.Created) { String msg = String.Format("An error occurred while uploading this file: {0}\n\nError response code: {1}", System.IO.Path.GetFileName(localFilename), response.StatusCode.ToString()); LogWarning(msg, "2ACFFCCA-59BA-40c8-A9AB-05FA3331D223"); result = false; } } catch (Exception ex) { LogException(ex, "{E9D62A93-D298-470d-A6BA-19AAB237978A}"); result = false; } return result; } The class also contains the LogException() and LogWarning() methods. When the application is launched, it parses the query string for some initial values.  
    The query string looks like this: string queryString = "Srv=clickonce&Sec=N&Doc=DMI&SiteName=&Speed=128000&Max=50"; The Srv value is the path to the server (my Virtual Machine is named “clickonce”), the Sec is short for security – meaning HTTPS or HTTP, the Doc is the shortcut for which document library to use, and SiteName is the name of the SharePoint site.  Speed is used to calculate an estimate for download speed for each file.  We added this so our users uploading documents would realize how long it might take for clients in remote locations (using slow WAN connections) to download the documents. The last value, Max, is the maximum size that the SharePoint site will allow documents to be.  This allowed us to give users a warning that a file is too large before we even attempt to upload it. (A rough sketch of how these query string values might be parsed appears at the end of this post.) Another critical piece is the meta data collection.  We organized our site using SharePoint content types, so when the app loads, it gets a list of the document library's content types.  The user then selects one of the content types from the drop down list, and then we query SharePoint to get a list of the fields that make up that content type.  We used both an out of the box web service, and one that we custom built, in order to get these values. Once we have the content type fields, we then add controls to the form.  Which type of control we add depends on the data type of the field.  (DateTime pickers for date/time fields, etc.)  We didn't write code to cover every data type, since we were working with a limited set of content types and field data types. Here's a screen shot of the Form, before and after someone has selected the content types and our code has added the custom controls: The other piece of meta data we collect is in the upper right corner of the app, “Users with access”.  This box lists the different SharePoint Groups that we have set up and, by checking the boxes, the user can set the permissions on the uploaded documents. All of this meta data is collected and submitted to our custom web service, which then sets the values on the documents on the list.  We'll look at these web services in a future post. In the next post, we'll walk through the Custom Action we built.
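
    As promised above, here is a rough sketch of how those query string values might be parsed; it is an assumption for illustration, not the author's actual implementation (only the key names come from the post):

        using System;
        using System.Web; // add a reference to System.Web for HttpUtility

        class QueryStringDemo
        {
            static void Main()
            {
                // Example string taken from the post; parsing approach is illustrative only.
                string queryString = "Srv=clickonce&Sec=N&Doc=DMI&SiteName=&Speed=128000&Max=50";
                var values = HttpUtility.ParseQueryString(queryString);

                string server   = values["Srv"];              // web server name
                bool useHttps   = values["Sec"] == "Y";        // HTTPS or HTTP
                string docLib   = values["Doc"];               // which document library to use
                string siteName = values["SiteName"];          // SharePoint site name (may be empty)
                int speed       = int.Parse(values["Speed"]);  // WAN speed used for the download estimate
                int maxSize     = int.Parse(values["Max"]);    // maximum allowed document size

                Console.WriteLine("Uploading to {0}://{1}/{2} (site '{3}', speed {4}, max {5})",
                    useHttps ? "https" : "http", server, docLib, siteName, speed, maxSize);
            }
        }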

    Read the article

  • 500 error on Joomla website

    - by Rachel Sparks
    PHP Fatal error: Call to a member function setQuery() on a non-object in /home/josh/public_html/administrator/components/com_jfusion/plugins/phpbb3/forum.php on line 226 Just moved over to a new server. Anyone have ideas as to what is wrong? Is this a database issue? line 226: //get permissions for all forums in case more than one module/plugin is present with different settings $db = & JFusionFactory::getDatabase($this->getJname()); $query = "SELECT forum_id FROM #__forums WHERE forum_type = 1 ORDER BY left_id"; $db->setQuery($query); //226 $forumids = $db->loadResultArray();
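
    One way to narrow this down (purely a diagnostic sketch, not a known JFusion fix) is to check what getDatabase() actually returns before calling setQuery(); a fatal "call to a member function on a non-object" at that line usually means the plugin never obtained a database object, often because the connection settings no longer match the new server:

        <?php
        // Diagnostic sketch only: verify the JFusion database object before using it.
        $db = JFusionFactory::getDatabase($this->getJname());

        if (!is_object($db)) {
            // getDatabase() returned null/false, so the phpbb3 plugin has no valid
            // connection - check the JFusion plugin's database settings after the move.
            JError::raiseWarning(500, 'JFusion could not obtain a database object for ' . $this->getJname());
            return array();
        }

        $query = "SELECT forum_id FROM #__forums WHERE forum_type = 1 ORDER BY left_id";
        $db->setQuery($query);
        $forumids = $db->loadResultArray();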

    Read the article

  • Oracle Solaris Studio Express 6/10 and its Customer Feedback Program are now available

    - by pieter.humphrey
    Oracle Solaris Studio Express 6/10 and the Customer Feedback Program for it are now available. Oracle Solaris Studio Express 6/10 is available on Solaris 10 (SPARC, x86), OEL 5 (x86), RHEL 5 (x86), SuSE 11 (x86) today and will be available for OpenSolaris in the near future. New feature highlights since the last release include: C/C++/Fortran compiler optimizations for the latest UltraSPARC and SPARC64-based architectures such as UltraSPARC T2 and SPARC64 VII C/C++/Fortran compiler optimizations for the latest x86 architectures including the Intel Xeon 7500 processor series (Nehalem-EX) and the Intel Xeon 5600 processor series (Westmere-EP) Enhanced debugging and code coverage tooling Improved application profiling with the Performance Analyzer Updated IDE based on NetBeans 6.8 To find more information and download go to http://developers.sun.com/sunstudio/downloads/express/ To participate in the customer feedback program for Oracle Solaris Studio Express 6/10 go to http://developers.sun.com/sunstudio/customerfeedback/index.jsp Please get the word out, try out this new release and send us your feedback! Technorati Tags: developer,development,solaris,sparc,Oracle Solaris Studio,Solaris Studio,Sun Studio,oracle,otn del.icio.us Tags: developer,development,solaris,sparc,Oracle Solaris Studio,Solaris Studio,Sun Studio,oracle,otn

    Read the article

  • Missing Indexes DMV Report, 3 billion Impact!

    - by Tara Kizer
    We've been having some major performance issues with one of the applications that I support.  The database is on SQL Server 2005 and is about 150GB in size.  We've identified a couple of issues already on the database side.  The first issue is that some query (or maybe several queries) is getting a bad execution plan at some point in time during the day.  When it occurs, database performance comes to a grinding halt.  We know it's a bad execution plan as running DBCC FREEPROCCACHE immediately resolves the problem system-wide.  As we have not yet identified the problematic query, we've put a temporary solution in place that frees the procedure cache on an hourly basis via a SQL Agent job.  This is not ideal, but it is getting us through the day without a major problem.  We are actively working on identifying the problematic query and hope to disable the SQL Agent job soon. Earlier this week, we had a major slowdown for one of the processes of this application.  I was unable to find any database performance issues, but I continued to investigate it.  One of the things that I typically do when investigating database performance issues is run the “Missing Indexes DMV Report” (that's what I call it at least).  When analyzing the output of that report, I immediately dismiss anything under 1 million “Impact” as I want to target the “low-hanging fruit” initially.  When I ran the report earlier this week, I was shocked to find a suggested index with an impact of over 3 billion! Do I win a prize for the highest impact?  Has anyone seen a value higher than mine?  My exact value was 3154284120.67765. The performance issue from earlier this week ended up being an application problem, but it also brought to light a much needed index.  I had previously seen this index come up in that report but always with a much lower impact.  I had never considered it as the index's selectivity is very low.  It's a composite index with three columns.  The first column is not selective, the first two columns are not selective, and the three columns together are not selective.  In fact, no matter how I order it, the index will not be selective at all.  I briefly discussed this with Kimberly Tripp, and she said that this was okay for covering indexes.  Selectivity is irrelevant for a covering index.  She indicated that she's even created indexes with gender as the first column in the index.  I've got lots to learn still!
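
    For reference, a common formulation of the missing-index DMV query with the impact figure calculated the usual way; this is the widely circulated version, which may differ in detail from the exact script the author runs:

        -- Widely used missing-index query; "Impact" is the usual approximation.
        SELECT  ( migs.avg_total_user_cost * migs.avg_user_impact * ( migs.user_seeks + migs.user_scans ) ) AS [Impact] ,
                mid.statement AS [Table] ,
                mid.equality_columns ,
                mid.inequality_columns ,
                mid.included_columns ,
                migs.user_seeks ,
                migs.user_scans
        FROM    sys.dm_db_missing_index_group_stats AS migs
                INNER JOIN sys.dm_db_missing_index_groups AS mig
                    ON migs.group_handle = mig.index_group_handle
                INNER JOIN sys.dm_db_missing_index_details AS mid
                    ON mig.index_handle = mid.index_handle
        ORDER BY [Impact] DESC;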

    Read the article

  • Favorite Visual Studio 2010 Extensions

    - by Scott Dorman
    Now that Visual Studio 2010 has been released, there are a lot of extensions being written. In fact, as of today (May 1, 2010 at 15:40 UTC) there are 809 results for Visual Studio 2010 in the Visual Studio Gallery. If you filter this list to show just the free items, there are still 251 extensions available. Given that number (and it is currently increasing weekly) it can be difficult to find extensions that are useful. Here is the list of extensions that I currently have installed and find useful: Word Wrap with Auto-Indent Indentation Matcher Extension Structure Adornment (this also installs the following extensions: BlockTagger BlockTaggerImpl SettingsStore SettingsStoreImpl) Source Outliner Triple Click ItalicComments Go To Definition Spell Checker Remove and Sort Using Format Document Open Folder in Windows Explorer Find Results Highlighter Regular Expressions Margin Indentation Matcher Extension Word Wrap with Auto-Indent VSCommands HelpViewerKeywordIndex StyleCop Visual Studio Color Theme Editor PowerCommands for Visual Studio 2010 Extension Analyzer CodeCompare Team Foundation Server Power Tools VS10x Selection Popup Color Picker Completion Numbered Bookmarks   Technorati Tags: Visual Studio,Extensions

    Read the article

  • Multiple Object Instantiation

    - by Ricky Baby
    I am trying to get my head around object oriented programming as it pertains to web development (more specifically PHP). I understand inheritance and abstraction etc., and know all the "buzz-words" like encapsulation and single purpose, and why I should be doing all this. But my knowledge falls short when it comes to actually creating objects that relate to the data I have in my database. Creating a single object that is representative of a single entity makes sense, but what are the best practices when creating 100, 1,000 or 10,000 objects of the same type? For instance, when trying to display a list of the items, ideally I would like to be consistent with the objects I use, but where exactly should I run the query/get the data to populate the object(s), as running 10,000 queries seems wasteful? As an example, say I have a database of cats, and I want a list of all black cats. Do I need to set up a FactoryObject which grabs the data needed for each cat from my database, then passes that data into each individual CatObject and returns the results in an array/object, or should I pass each CatObject its identifier and let it populate itself in a separate query?
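
    One common pattern (offered here only as a hedged sketch, since the question leaves it open) is to let a factory run a single query and hydrate all of the objects from the result set, rather than issuing one query per object; the class and column names below are hypothetical:

        <?php
        // Sketch only: one query, many objects.
        class Cat
        {
            public $id;
            public $name;
            public $colour;

            public function __construct(array $row)
            {
                $this->id     = (int) $row['id'];
                $this->name   = $row['name'];
                $this->colour = $row['colour'];
            }
        }

        class CatFactory
        {
            private $pdo;

            public function __construct(PDO $pdo)
            {
                $this->pdo = $pdo;
            }

            /** @return Cat[] all cats of a given colour, hydrated from a single query */
            public function findByColour($colour)
            {
                $stmt = $this->pdo->prepare('SELECT id, name, colour FROM cats WHERE colour = ?');
                $stmt->execute(array($colour));

                $cats = array();
                foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
                    $cats[] = new Cat($row); // each object is populated from data already in memory
                }
                return $cats;
            }
        }

        // Usage (connection details are placeholders):
        // $factory   = new CatFactory(new PDO('mysql:host=localhost;dbname=pets', 'user', 'pass'));
        // $blackCats = $factory->findByColour('black');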

    Read the article

  • Coherence 3.7.1 Released

    - by JuergenKress
    Oracle Coherence 3.7.1 introduces a REST API, Exalogic InfiniBand integration, improved data access performance due to more efficient in-memory and disk-based storage, query explain plan support and much more. Download now! View the webcast: Unbeatable Performance for your Cloud Application Foundation. To download Coherence 3.7.1 please visit OTN. Coherence Screencasts: Coherence 3.7.1 – Extend Only Keys Coherence 3.7.1 – REST Support Coherence 3.7.1 – POF Object Identities and References Coherence 3.7.1 – POF Annotation Support Coherence 3.7.1 – Query Explain Plan For more information please visit the Oracle Coherence Knowledge Base. To receive regular Coherence information, become a member of the WebLogic Partner Community: please first log in at http://partner.oracle.com and then visit: http://www.oracle.com/partners/goto/wls-emea Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Coherence,Coherence 3.7.1,Oracle,WebLogic,J2EE caching,OPN,Jürgen Kress

    Read the article

  • HSSFS Part 3: SQL Saturday is Awesome! And DEFAULT_DOMAIN(), and how I found it

    - by Most Valuable Yak (Rob Volk)
    Just a quick post I should've done yesterday, but I was recovering from SQL Saturday #48 in Columbia, SC, where I went to some really excellent sessions by some very smart experts.  If you have not yet attended a SQL Saturday, or it's been more than 1 month since you last did, SIGN UP NOW! While searching the OBJECT_DEFINITION() of SQL Server system procedures I stumbled across the DEFAULT_DOMAIN() function in xp_grantlogin and xp_revokelogin.  I couldn't find any information on it in Books Online, and it's a very simple, self-explanatory function, but it could be useful if you work in a multi-domain environment.  It's also the kind of neat thing you can find by using this query: SELECT OBJECT_SCHEMA_NAME([object_id]) object_schema, name FROM sys.all_objects WHERE OBJECT_DEFINITION([object_id]) LIKE '%()%'  ORDER BY 1,2 I'll post some elaborations and enhancements to this query in a later post, but it will get you started exploring the functional SQL Server sea. UPDATE: I goofed earlier and said SQL Saturday #46 was in Columbia. It's actually SQL Saturday #48, and SQL Saturday #46 was in Raleigh, NC.
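
    If you want to see what it returns on your own server, the call is as simple as it looks (shown here as a quick illustration; being undocumented, its behaviour could change between versions):

        -- Undocumented function: returns the server's default Windows domain name.
        SELECT DEFAULT_DOMAIN() AS [DefaultDomain];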

    Read the article

  • Should all foreign table references use foreign key constraints

    - by TecBrat
    Closely related to: Foreign key restrictions -> yes or no? I asked a question on SO and it led me to ask this here. If I'm faced with a choice of having a circular reference or just not enforcing the constraint, which is the better choice? In my particular case I have customers and addresses. I want an address to have a reference to a customer, and I want each customer to have a default billing address id and a default shipping address id. I might query for all addresses that have a certain customer ID, or I might query for the address with the ID that matches the default shipping or billing address ids. I'm not sure yet how the constraints (or lack thereof) will affect the system as my application and its data age.
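
    To make the circular reference concrete, here is a rough sketch of the two tables as described in the question (column names are assumptions); the cycle is customers.default_billing_address_id -> addresses.id and addresses.customer_id -> customers.id:

        -- Sketch of the circular dependency described above.
        CREATE TABLE customers (
            id                          INT PRIMARY KEY,
            name                        VARCHAR(100) NOT NULL,
            default_billing_address_id  INT NULL,   -- FK to addresses, added below
            default_shipping_address_id INT NULL    -- FK to addresses, added below
        );

        CREATE TABLE addresses (
            id          INT PRIMARY KEY,
            customer_id INT NOT NULL REFERENCES customers (id),
            line1       VARCHAR(200) NOT NULL
        );

        -- Leaving these two constraints off (or keeping the columns nullable and
        -- populating them only after the address exists) is the trade-off in question.
        ALTER TABLE customers
            ADD CONSTRAINT fk_default_billing  FOREIGN KEY (default_billing_address_id)  REFERENCES addresses (id);
        ALTER TABLE customers
            ADD CONSTRAINT fk_default_shipping FOREIGN KEY (default_shipping_address_id) REFERENCES addresses (id);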

    Read the article

  • ApiChange Corporate Edition

    - by Alois Kraus
    In my initial announcement I could only cover a small subset of what ApiChange can do for you. Let's look at how ApiChange can help you fix bugs caused by incorrect usage of an API in a fraction of the time it would normally take. It happens that software is tested and some bugs show up. One bug could be: we get way too many log messages during our test run. Now you have the task of finding the most frequent messages and eliminating those Log calls from the source code. But what about the myriad other log calls? How can we check that the distribution of log calls is nearly equal across all developers? And if not, how can we contact the developer to check his code? ApiChange can help you to connect these loose ends. It combines several information silos into one cohesive view. The picture below shows how it is able to fill the gaps. The public version does currently "only" parse the binaries and pdbs to give you, for a –whousesmethod query, the following columns: If it happens that you have Rational ClearCase (a source control system) in your development shop and an Active Directory in place, then ApiChange will take the source file (determined from the pdb) and try to find the last check-in user, who should be present in your Active Directory. From there it is only a small hop to an LDAP query to your AD domain or the GC (Global Catalog) to get, from the user name, his full name, email, phone number, department, .... ApiChange will append this additional data to all of your query results which contain source files if you add the –fileinfo option. As I said, this is currently not enabled by default since the AD domain needs to be configured, which is currently only some hard-coded values in the SiteConstants.cs source file of ApiChange.Api.dll. Once you have this data you can generate metrics based on source file, developer, assembly, … and add additional data by drag and drop directly into the pivot tables inside Excel. This allows you, for example, to generate a report which lists the source files with the most log calls in descending order along with the developer name and email in the pivot table. Armed with this knowledge you can take meaningful measures, e.g. ask the developer if the huge number of log calls in this source file can be optimized. I am aware that this is a very specific scenario, but it is a huge time saver when you are able to fill the missing gaps of information. ApiChange does this in an extensible way: namespace ApiChange.ExternalData {     public interface IFileInformationProvider     {         UserInfo GetInformationFromFile(string fileName);     } } It defines an interface where you can implement your custom information provider to close the gap between the source control system and the real person I have to email to ask whether his code needs a closer inspection.

    Read the article

  • BizTalk 2009 - Service Instances: Last 100

    - by StuartBrierley
    Having previously talked about the lack of the traditional HAT in BizTalk 2009, the question then becomes how do you replicate some of the functionality that was previously relied on? I have already covered the Last 100 Messages Received, the Last 100 Messages Sent, and the Last 50 Suspended Messages queries, so what about service instances? The BizTalk 2009 Group Hub allows you to search for suspended service instances and also running service instances, but not the two together. In BizTalk 2004 we had a query in HAT to return the last 100 service instances.  Let's create a direct replacement in the BizTalk 2009 HAT-less environment. Basically we are creating a query to search for the last one hundred tracked service instances:

    Read the article

  • How to join two collections with LINQ

    - by JustinGreenwood
    Here is a simple and complete example of how to perform joins on two collections with LINQ. I wrote it for a friend to show him, in one simple file, the power of LINQ queries and anonymous objects. In the file below, there are two simple data classes defined: Person and Item. In the beginning of the main method, two collections are created. Note that the Item's OwnerId field reference the PersonId of a Person object. The effect of the LINQ query below is equivalent to a SQL statement looking like this: select Person.PersonName as OwnerName, Item.ItemName as OwnedItem from Person inner join Item on Item.OwnerId = Person.PersonId order by Item.ItemName desc; using System; using System.Collections.Generic; using System.Linq; namespace LinqJoinAnonymousObjects { class Program { class Person { public int PersonId { get; set; } public string PersonName { get; set; } } class Item { public string ItemName { get; set; } public int OwnerId { get; set; } } static void Main(string[] args) { // Create two collections: one of people, and another with their possessions. var people = new List<Person> { new Person { PersonId=1, PersonName="Justin" }, new Person { PersonId=2, PersonName="Arthur" }, new Person { PersonId=3, PersonName="Bob" } }; var items = new List<Item> { new Item { OwnerId=1, ItemName="Armor" }, new Item { OwnerId=1, ItemName="Book" }, new Item { OwnerId=2, ItemName="Chain Mail" }, new Item { OwnerId=2, ItemName="Excalibur" }, new Item { OwnerId=3, ItemName="Bubbles" }, new Item { OwnerId=3, ItemName="Gold" } }; // Create a new, anonymous composite result for person id=2. var compositeResult = from p in people join i in items on p.PersonId equals i.OwnerId where p.PersonId == 2 orderby i.ItemName descending select new { OwnerName = p.PersonName, OwnedItem = i.ItemName }; // The query doesn't evaluate until you iterate through the query or convert it to a list Console.WriteLine("[" + compositeResult.GetType().Name + "]"); // Convert to a list and loop through it. var compositeList = compositeResult.ToList(); Console.WriteLine("[" + compositeList.GetType().Name + "]"); foreach (var o in compositeList) { Console.WriteLine("\t[" + o.GetType().Name + "] " + o.OwnerName + " - " + o.OwnedItem); } Console.ReadKey(); } } } The output of the program is below: [WhereSelectEnumerableIterator`2] [List`1] [<>f__AnonymousType1`2] Arthur - Excalibur [<>f__AnonymousType1`2] Arthur - Chain Mail

    Read the article

  • SCCM 2007 Collections per OU

    - by VirtualizeIT
    Recently I wanted to set up our SCCM collections to mirror our Active Directory structure. I finally figured out how to create collections per OU of the domain. I decided to create a simple tutorial that may help other IT professionals with the steps to complete this task. 1. Open the ConfigMgr console and navigate to the collections. To get to the collections go to Site Database>Computer Management>Collections. 2. In 'Collections', right-click and select New Collection. A wizard will pop up so you can enter the name of the collection and any notes that you may want to add that are associated with the collection. 3. Next, select the database icon. In the 'Name' textbox enter the name of the query. I named mine 'Query' just for simplicity's sake. After you enter the name select 'Edit Query Statement…' 4. Select the 'Criteria' tab. 5. Select the icon that looks like a sun. 6. At this point you should see a criterion properties dialog box. 7. Next, click the 'Select' button. 8. Under 'Attribute class', scroll through until you see 'System Resource', and for 'Attribute', scroll through until you see 'System OU Name'. 9. After that select OK. 10. In the 'Value' textbox enter the string that is associated with the OU in your domain. NOTE: If you don't know the string name for your OU, you can simply go to "Active Directory Users and Computers", right-click on the OU and select Properties. On the 'Object' tab you should see the string under 'Canonical name of object'. That is the string that you put in the 'Value' text box. 11. After you enter the OU string name press OK>OK>OK>NEXT>NEXT>FINISH. That's it! I hope this tutorial has helped you understand how to create a collection that follows your OU structure.
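
    For reference, the query statement that the wizard builds behind those dialogs ends up as a WQL query along these lines; the OU string is a placeholder, so substitute the canonical name from step 10:

        -- WQL sketch: collection membership limited to one Active Directory OU.
        SELECT SMS_R_SYSTEM.ResourceID, SMS_R_SYSTEM.ResourceType, SMS_R_SYSTEM.Name,
               SMS_R_SYSTEM.SMSUniqueIdentifier, SMS_R_SYSTEM.ResourceDomainORWorkgroup, SMS_R_SYSTEM.Client
        FROM   SMS_R_System
        WHERE  SMS_R_System.SystemOUName = "MYDOMAIN.LOCAL/WORKSTATIONS/FINANCE"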

    Read the article

  • How to write good blog post tags

    - by keruilin
    It seems that you have three choices in deciding how you write tags for your blog posts: Make them user friendly Make them highly searchable Combo of the two For example, let's say that I have a blog post that has write-ups on the top 10 iPad apps for business travel (e.g., Evernote, Dragon Dictation, Instapaper, etc.). User friendly tags: ipad apps, business travel Searchable keywords (analyzed with Google Keyword Analyzer): ipad apps, ipad travel apps, evernote ipad, instapaper, instapaper ipad Combo: ipad apps, ipad travel apps So my question comes down to this: which is really the best choice -- 1, 2 or 3? Note: these visible post tags will also serve as the meta keywords for the post page.

    Read the article

  • I need recommendations on free, open source, PHP-based business intelligence widget frameworks [on hold]

    - by Volomike
    I'm a PHP developer on Linux, and my manager wants a business intelligence dashboard. He wants to see in real time our profit/loss figures in fancy charts, based on our software sales. I could code it all from scratch and use the Google Charts API or some other charts API to help me. However, I wanted to know if there is a free, open source, PHP-based business intelligence package out there, or some sort of widget framework that I could start with. That way, I can build the BI widgets inside that framework and not have to do everything from scratch. I apologize ahead of time if this is the wrong Stack Exchange site for this query. I don't know where else to place it, and I do want to follow the rules.

    Read the article

  • BigQuery - Best Practices for Running Queries on Massive Datasets

    BigQuery - Best Practices for Running Queries on Massive Datasets Join Michael Manoochehri and Ryan Boyd from the big data Developer Relations team on Friday, September 21st, at 10am PDT, as they discuss best practices for answering questions about massive datasets with Google BigQuery. They'll explore interesting Big Data use cases with some of our public datasets, using BigQuery's SQL-like language to return query results in seconds. They will also cover some of BigQuery's unique query functions as well. For a general overview of BigQuery, watch our overview video: youtu.be Please use the moderator below (goo.gl) to ask your questions, which will be answered live! More info here: developers.google.com From: GoogleDevelopers
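
    As a flavour of the SQL-like syntax being demonstrated, here is a hypothetical example against one of the public sample datasets (not a query from the session itself):

        -- Counts word usage across the public Shakespeare sample dataset.
        SELECT word, SUM(word_count) AS total
        FROM [publicdata:samples.shakespeare]
        GROUP BY word
        ORDER BY total DESC
        LIMIT 10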

    Read the article

  • "continue" and "break" for static analysis

    - by B. VB.
    I know there have been a number of discussions of whether break and continue should be considered harmful generally (with the bottom line being - more or less - that it depends; in some cases they enhance clarity and readability, but in other cases they do not). Suppose a new project is starting development, with plans for nightly builds including a run through a static analyzer. Should it be part of the coding guidelines for the project to avoid (or strongly discourage) the use of continue and break, even if it can sacrifice a little readability and require excessive indentation? I'm most interested in how this applies to C code. Essentially, can the use of these control operators significantly complicate the static analysis of the code, possibly resulting in additional false negatives that would otherwise register a potential fault if break or continue were not used? (Of course a complete static analysis proving the correctness of an arbitrary program is an undecidable proposition, so please keep responses to any hands-on experience you have with this, and not theoretical impossibilities.) Thanks in advance!
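
    To make the readability trade-off concrete, here is a small made-up C fragment showing the same search loop written with break and then without it (purely an illustration, not code from the question):

        #include <stddef.h>

        /* With break: exits as soon as the key is found. */
        static int find_with_break(const int *a, size_t n, int key)
        {
            int found = -1;
            for (size_t i = 0; i < n; i++) {
                if (a[i] == key) {
                    found = (int)i;
                    break;              /* an extra exit path the analyzer has to model */
                }
            }
            return found;
        }

        /* Without break: a single exit condition, at the cost of a busier loop guard. */
        static int find_without_break(const int *a, size_t n, int key)
        {
            size_t i = 0;
            int found = -1;
            while (i < n && found < 0) {
                if (a[i] == key) {
                    found = (int)i;
                }
                i++;
            }
            return found;
        }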

    Read the article
