Search Results

Search found 42428 results on 1698 pages for 'database query'.


  • How can I hit my database with an AJAX call using JavaScript?

    - by tmedge
    I am pretty new at this stuff, so bear with me. I am using ASP.NET MVC. I have created an overlay to cover the page when someone clicks a button corresponding to a certain database entry. Because of this, ALL of my code for this functionality is in a .js file contained within my project. What I need to do is pull the info corresponding to my entry from the database itself using an AJAX call, and place that into my textboxes. Then, after the end user has made the desired changes, I need to update that entry's values to match the input. I've been surfing the web for a while and have failed to find an example that fits my needs effectively. Here is my code in my JavaScript file thus far:

        function editOverlay(picId) {
            //pull up an overlay
            $('body').append('<div class="overlay" />');
            var $overlayClass = $('.overlay');
            $overlayClass.append('<div class="dataModal" />');
            var $data = $('.dataModal');
            overlaySetup($overlayClass, $data);

            //set up form
            $data.append('<h1>Edit Picture</h1><br /><br />');
            $data.append('Picture name: &nbsp;');
            $data.append('<input class="picName" /> <br /><br /><br />');
            $data.append('Relative url: &nbsp;');
            $data.append('<input class="picRelURL" /> <br /><br /><br />');
            $data.append('Description: &nbsp;');
            $data.append('<textarea class="picDescription" /> <br /><br /><br />');
            var $nameBox = $('.picName');
            var $urlBox = $('.picRelURL');
            var $descBox = $('.picDescription');
            var pic = null;

            //this is where I need to pull the actual object from the db
            //var imgList =
            for (var temp in imgList) {
                if (temp.Id == picId) {
                    pic = temp;
                }
            }
            /*
            $nameBox.attr('value', pic.Name);
            $urlBox.attr('value', pic.RelativeURL);
            $descBox.attr('value', pic.Description);
            */

            //close buttons
            $data.append('<input type="button" value="Save Changes" class="saveButton" />');
            $data.append('<input type="button" value="Cancel" class="cancelButton" />');
            $('.saveButton').click(function() {
                /*
                pic.Name = $nameBox.attr('value');
                pic.RelativeURL = $urlBox.attr('value');
                pic.Description = $descBox.attr('value');
                */
                //make a call to my Save() method in my repository
                CloseOverlay();
            });
            $('.cancelButton').click(function() {
                CloseOverlay();
            });
        }

    The stuff I have commented out is what I need to accomplish and/or is not available until prior issues are resolved. Any and all advice is appreciated! Remember, I am VERY new to this stuff (two weeks, to be exact) and will probably need highly explicit instructions. BTW: overlaySetup() and CloseOverlay() are functions I have living someplace else. Thanks!
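
    Since all of the markup lives in the .js file, the usual shape of the missing piece is a controller action that returns JSON plus a jQuery call against it. Below is a minimal sketch of the server side; every name in it (ImageController, IImageRepository, Pic) is a hypothetical stand-in, not something from the post.

        // A minimal sketch of the server side of that AJAX round trip (ASP.NET MVC 2).
        // All names here (ImageController, IImageRepository, Pic) are hypothetical.
        using System.Web.Mvc;

        public class ImageController : Controller
        {
            private readonly IImageRepository _repository; // assumed data-access abstraction

            public ImageController(IImageRepository repository)
            {
                _repository = repository;
            }

            // Fetch one entry as JSON. Called from the overlay, e.g.:
            //   $.getJSON('/Image/GetPic', { id: picId }, function (pic) {
            //       $nameBox.val(pic.Name); $urlBox.val(pic.RelativeURL); $descBox.val(pic.Description);
            //   });
            public JsonResult GetPic(int id)
            {
                Pic pic = _repository.GetById(id);
                return Json(pic, JsonRequestBehavior.AllowGet); // AllowGet is required for GET in MVC 2
            }

            // Persist the edits. Called from the Save Changes click, e.g.:
            //   $.post('/Image/SavePic', { Id: picId, Name: ..., RelativeURL: ..., Description: ... }, CloseOverlay);
            [HttpPost]
            public JsonResult SavePic(Pic pic)
            {
                _repository.Save(pic);
                return Json(new { success = true });
            }
        }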

    Read the article

  • Where do I find scripts generated by SharePoint MCMS Migration Profiles

    - by HipCzeck
    I am attempting to migrate data from a Microsoft Content Management Server (MCMS) 2002 instance into a new Microsoft Office SharePoint Server (MOSS) 2007 installation using the Manage Microsoft Content Management Server Migration Profiles tool in the Operations space of MOSS Central Administration. When analyzing the profile, I receive 4 warnings, all of which may be safely ignored, but when I actually execute the migration profile, I get the same warnings and an additional error with a description of:

        Line 6: Incorrect syntax near ';'.

    I have seen this error numerous times when mucking about in SQL Server and recognize it as a Transact-SQL error message, but can't find the actual SQL statement that is being executed so that I may determine the source of the error.

    EDIT: After enabling verbose logging on the MCMS 2002 Migration category, and poring through the Unified Logging Service (ULS) logs, I received a more complete stack trace at the point of the error, and a couple more anomalies listed below.

    Anomalies: The following is an abbreviated listing from the ULS logs around the time of the pre-migration analysis.

        01 MCMS 2002 Migration Verbose Start ConnectionCheck
        02 MCMS 2002 Migration Verbose End ConnectionCheck
        03 MCMS 2002 Migration Verbose Start DatabaseCheck
        04 MCMS 2002 Migration High Extra table SiteDeployLock will not be migrated
        05 MCMS 2002 Migration High Analysis: Extra index PK__SiteDeployLock__05D8E0BE
        06 MCMS 2002 Migration Verbose End DatabaseCheck
        07 MCMS 2002 Migration Medium Pre-migration analysis: RootCheckTask is skipped because database check is blocked.
        08 MCMS 2002 Migration Medium Pre-migration analysis: RightsGroupNameCheckTask is skipped because database check is blocked.
        09 MCMS 2002 Migration Medium Pre-migration analysis: InvalidNameCheckTask is skipped because database check is blocked.
        10 MCMS 2002 Migration Medium Pre-migration analysis: LeafNameCheckTask is skipped because database check is blocked.
        11 MCMS 2002 Migration Medium Pre-migration analysis: LeafLengthCheckTask is skipped because database check is blocked.
        12 MCMS 2002 Migration Medium Pre-migration analysis: TemplateNameCheckTask is skipped because database check is blocked.
        13 MCMS 2002 Migration Medium Pre-migration analysis: TemplateCollisionCheckTask is skipped because database check is blocked.
        14 MCMS 2002 Migration Medium Pre-migration analysis: PlaceholderCheckTask is skipped because database check is blocked.
        15 MCMS 2002 Migration Medium Pre-migration analysis: CheckedOutItemsCheckTask is skipped because database check is blocked.
        16 MCMS 2002 Migration Medium Pre-migration analysis: SubmittedItemsCheckTask is skipped because database check is blocked.
        17 MCMS 2002 Migration Medium Pre-migration analysis: DeletedItemsCheckTask is skipped because database check is blocked.
        18 MCMS 2002 Migration Medium Pre-migration analysis: UserCheckTask is skipped because database check is blocked.
        19 MCMS 2002 Migration Medium Pre-migration analysis: FileSizeCheckTask is skipped because database check is blocked.
        20 MCMS 2002 Migration Medium Pre-migration analysis: HostHeaderMapCheckTask is skipped because database check is blocked.
        21 MCMS 2002 Migration Verbose Start Server check
        22 MCMS 2002 Migration Verbose End Server check
        23 MCMS 2002 Migration Verbose Start Server emptyness check
        24 MCMS 2002 Migration Verbose End Server emptyness check
        25 MCMS 2002 Migration Medium PreMigrationAnalyzer: Dry run starts
        26 MCMS 2002 Migration Verbose CleanLockProcedure: start.
        27 MCMS 2002 Migration High CleanLockProcedure: connection system lock is null
        28 MCMS 2002 Migration Verbose Finished all tasks
        29 MCMS 2002 Migration High PreMigrationAnalyzer ends with True
        30 MCMS 2002 Migration Verbose Migration profile status is changed to AnalysisPassed

    Specifically, the two High-level alerts on lines 4 and 5 are reflected in the migration report as warnings when running Pre-migration Analysis or running the migration profile. In addition, two other warnings appear in the migration report indicating two tables containing data (LayoutProperty and NodeLayout) that should be empty. According to the documentation, warnings are not sufficient cause to stop migration from occurring. Other anomalies are on lines 7-20, indicating a series of tests that are skipped because the database check is blocked. The ULS doesn't give any additional warnings to indicate that the database check was blocked or exited in exceptional circumstances. After switching the profile from pre-migration analysis to exporting, there is one medium-level warning that LastChangeTime is not set or incorrect. (null). As with all the skipped test names and SQL table names from the warnings, the major search engines are unable (with the exception of LayoutProperty) to find any reference to these objects or tests. Finally, the section of the log indicating the actual live migration attempt is appended below:

        01 MCMS 2002 Migration Medium LastChangeTime is not set or incorrect. (null)
        02 MCMS 2002 Migration Verbose Set export lock
        03 MCMS 2002 Migration Verbose CleanLockProcedure: start.
        04 MCMS 2002 Migration Verbose CleanLockProcedure: end.
        05 MCMS 2002 Migration Verbose Prepare for export
        06 MCMS 2002 Migration Verbose Open connection...
        07 MCMS 2002 Migration Verbose Create temporary stored procedures
        08 MCMS 2002 Migration Verbose Create temporary tables...
        09 MCMS 2002 Migration Verbose Initialize temporary tables...
        10 MCMS 2002 Migration Verbose InitializeTemporaryTables: start
        11 MCMS 2002 Migration Verbose Initialize export table...
        12 MCMS 2002 Migration Verbose InitializeExportTable: start
        13 MCMS 2002 Migration Verbose CleanLockProcedure: start.
        14 MCMS 2002 Migration Verbose CleanLockProcedure: end.
        15 MCMS 2002 Migration High Migration throws exception: Line 6: Incorrect syntax near ';'.. Stacktrace: at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at Microsoft.SharePoint.Publishing.Internal.Administration...
        16 MCMS 2002 Migration High ....MigrationBatchCommand.ExecuteImmediate(String command) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationBatchCommand.ExecuteWaitingCommands() at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDBSerializer.SerializeSelectedExportObject(StringCollection objectAttribs) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeExportTable(ScopeType scopeType) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeTemporaryTables(DateTime lastChangeTime) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis, SqlConnection connection) at Microsoft.SharePoint.Publishing.Internal.Admin...
        17 MCMS 2002 Migration High ...stration.MigrationDataAccess.InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis) at Microsoft.SharePoint.Publishing.Administration.ContentMigration.Export(MigrationDataAccess dataAccess) at Microsoft.SharePoint.Publishing.Administration.ContentMigration.MigrateInternal().
        18 MCMS 2002 Migration Verbose MigrationProfile: GetInstance. Start.
        19 MCMS 2002 Migration Verbose MigrationProfile: GetInstance. End.
        20 MCMS 2002 Migration Verbose Migration profile status is changed to Failed

    The stack trace of the failed parsing of the SQL command appears on lines 15-17. A cleaner version of the stack trace is appended below.

    Full Stack Trace:

        Migration throws exception: Line 6: Incorrect syntax near ';'..
          at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
          at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
          at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
          at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async)
          at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
          at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationBatchCommand.ExecuteImmediate(String command)
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationBatchCommand.ExecuteWaitingCommands()
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDBSerializer.SerializeSelectedExportObject(StringCollection objectAttribs)
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeExportTable(ScopeType scopeType)
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeTemporaryTables(DateTime lastChangeTime)
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis, SqlConnection connection)
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis)
          at Microsoft.SharePoint.Publishing.Administration.ContentMigration.Export(MigrationDataAccess dataAccess)
          at Microsoft.SharePoint.Publishing.Administration.ContentMigration.MigrateInternal()

    None of this log information indicates the SQL command that is failing a parser check.
I've checked the SQL servers hosting the source and destination databases for a trace of the query, but neither seems to have triggered the parse failure condition. That appears to have happened on the SharePoint server. Are there any other locations I should investigate that might tell me where to find the source of the error?

    Read the article

  • Fake ISAPI handler to serve static files with extensions that are rewritten by a URL rewriter

    - by developerit
    Introduction

    I often map the html extension to the asp.net DLL in order to use a URL rewriter with .html extensions. Recently, in the new version of www.nouvelair.ca, we renamed all URLs to end with .html. This works great, but it failed when we used FCK Editor: static html files would not get served because we mapped the html extension to the .NET Framework. What can we do to use the .html extension with our rewriter but still keep IIS's behavior for static html files?

    Analysis

    I thought that this could be resolved with a simple HTTP handler. We would map URLs of static files in our rewriter to this handler, which would read the static file and serve it, just as IIS would do.

    Implementation

    This is how I coded the class. Note that this may not be bulletproof. I only tested it once, and I am sure that the logic behind IIS is more complicated than this. If you find errors or think of possible improvements, let me know.

        Imports System.Web
        Imports System.Web.Services

        ' Author: Nicolas Brassard
        ' For: Solutions Nitriques inc. http://www.nitriques.com
        ' Date Created: April 18, 2009
        ' Last Modified: April 18, 2009
        ' License: CPOL (http://www.codeproject.com/info/cpol10.aspx)
        ' Files: ISAPIDotNetHandler.ashx
        '        ISAPIDotNetHandler.ashx.vb
        ' Class: ISAPIDotNetHandler
        ' Description: Fake ISAPI handler to serve static files.
        '              Useful when you want to serve a static file that has a rewritten extension.
        '              Example: I often map the html extension to the asp.net DLL in order to use a URL rewriter with .html.
        '              If you want to still serve static html files, add a rewriter rule to redirect html files to this handler.
        Public Class ISAPIDotNetHandler
            Implements System.Web.IHttpHandler

            Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest
                ' Since we are doing the job IIS normally does with html files,
                ' we set the content type to match html.
                ' You may want to customize this with your own logic, if you want to serve
                ' txt or xml or any other text file
                context.Response.ContentType = "text/html"

                ' We begin a try here. Any error that occurs will result in a 404 Page Not Found error.
                ' We replicate the behavior of IIS when it doesn't find the corresponding file.
                Try
                    ' Declare a local variable containing the value of the query string
                    Dim uri As String = context.Request("fileUri")

                    ' If the value in the query string is null,
                    ' throw an error to generate a 404
                    If String.IsNullOrEmpty(uri) Then
                        Throw New ApplicationException("No fileUri")
                    End If

                    ' If the value in the query string doesn't end with .html, then block the access.
                    ' This is a HUGE security hole since it could permit full read access to .aspx, .config, etc.
                    If Not uri.ToLower.EndsWith(".html") Then
                        ' throw an error to generate a 404
                        Throw New ApplicationException("Extension not allowed")
                    End If

                    ' Map the file on the server.
                    ' If the file doesn't exist on the server, it will throw an exception and generate a 404.
                    Dim fullPath As String = context.Server.MapPath(uri)

                    ' Read the actual file
                    Dim stream As IO.StreamReader = FileIO.FileSystem.OpenTextFileReader(fullPath)

                    ' Write the file into the response
                    context.Response.Output.Write(stream.ReadToEnd)

                    ' Close and Dispose the stream
                    stream.Close()
                    stream.Dispose()
                    stream = Nothing
                Catch ex As Exception
                    ' Set the Status Code of the response
                    context.Response.StatusCode = 404 'Page not found

                    ' For testing and debugging only! This may cause a security leak
                    ' context.Response.Output.Write(ex.Message)
                Finally
                    ' In all cases, flush and end the response
                    context.Response.Flush()
                    context.Response.End()
                End Try
            End Sub

            ' Automatically generated by Visual Studio
            ReadOnly Property IsReusable() As Boolean Implements IHttpHandler.IsReusable
                Get
                    Return False
                End Get
            End Property
        End Class

    Conclusion

    As you see, with our static files mapped to this handler using a query string (ex.: /ISAPIDotNetHandler.ashx?fileUri=index.html) you will have the same behavior as if you had asked for the URI /index.html. Finally, test this only in IIS with the html extension mapped to aspnet_isapi.dll. URL rewriting will work in Cassini (the internal web server shipped with Visual Studio), but it's not the same as with IIS, since EVERY request is handled by .NET.

    Versions: First release
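
    One note on the security hole the comments call out: the .html check alone does not stop ../ traversal outside the web root. A hedged hardening sketch follows (in C# for brevity; the VB translation is mechanical, and the class and method names are made up, not part of the original handler):

        // Hypothetical guard for the handler's fileUri parameter (my addition, not from the post).
        // Canonicalizes the mapped path and verifies it stays inside the application root.
        using System;
        using System.IO;
        using System.Web;

        public static class StaticFileGuard
        {
            public static bool IsSafeStaticFile(HttpContext context, string fileUri)
            {
                if (string.IsNullOrEmpty(fileUri)) return false;
                if (!fileUri.ToLowerInvariant().EndsWith(".html")) return false;

                // Resolve both paths to canonical form before comparing.
                // (Server.MapPath may itself throw for paths above the root; treat that as a 404 too.)
                string appRoot = Path.GetFullPath(context.Server.MapPath("~/"));
                string fullPath = Path.GetFullPath(context.Server.MapPath(fileUri));

                // The resolved file must live under the application root and exist.
                return fullPath.StartsWith(appRoot, StringComparison.OrdinalIgnoreCase)
                    && File.Exists(fullPath);
            }
        }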

    Read the article

  • Tutorial: Creating a Component for UCM

    - by Denisd
    So you have already installed UCM by following the tutorial (http://blogs.oracle.com/ecmbrasil/2009/05/tutorial_de_instalao_do_ucm.html), already done the hands-on (http://blogs.oracle.com/ecmbrasil/2009/10/tutorial_de_ucm.html), and now you want to go beyond the basics? You want to start building functionality for UCM? You want to become a UCM developer? You want to shape the Content Server in your own image?! Then today is your lucky day! In this tutorial we will learn how to create a component for the Content Server. Our first component, although not all that simple, will be built using only Content Server resources. In a future tutorial we will learn how to use Java classes as part of our components. In this tutorial we will develop a Favorites feature, where users can mark certain documents as their Favorites and later look those documents up in a list. We will not build the component with every feature it could have, but with what you will see here, it will be easy to improve this component, even for production environments.

    The MyFavorites Component

    Some characteristics of our Favorites component:

    - For reasons of space, we will assemble this component in a "quick and dirty" way, that is, without necessarily following the best practices of component development. To better understand component development practice, I recommend reading the Working With Components guide.
    - It will be developed for Brazilian Portuguese only. Other languages can be added later.
    - It will present an "Add to Favorites" option in the "Content Actions" menu (Content Information page), so the user can set a given file as one of his favorites.
    - When clicking this link, the user will be taken to a screen where he can type a comment about the favorite, to make it easier to read later.
    - The favorites will be saved in a database table that we will create as part of the component.
    - The "My Content Server" tray will have a new option called "My Favorites", which brings up a screen listing the favorites and letting the user delete the links.
    - Some features will be left out of this exercise, again for reasons of space, but we will list them at the end as complementary exercises.

    Resources of our Component

    The Favorites component will be built from a few resources. Let's take a closer look at what these resources are and what they do:

    - Query: a query is any activity I need to run against the database, the famous CRUD: Create, Read, Update, Delete. There are different ways to call a query, depending on the purpose:
      Select Query: runs a SQL command but discards the result. Used only to test whether the database connection is OK. It will not be used in our exercise.
      Execute Query: runs a SQL command that changes data in the database. It can be an INSERT, UPDATE or DELETE, and it discards the results. We will use Execute Queries to create, change and delete favorites.
      Select Cache Query: runs a SQL SELECT command and stores the results in a ResultSet. This ResultSet comes back as the result of the service and can be manipulated in IDOC, Java or other languages. We will use a Select Cache Query to return the list of a user's favorites.
    - Service: services are responsible for executing the queries (or Java classes, but that is a topic for another tutorial...). The service receives the input parameters, executes the query and returns the ResultSet (in the case of a SELECT). Services can be executed through templates, IDOC pages, other applications (through the API), or directly in the browser URL. In this exercise we will create services to Create, Edit, Delete and List a user's favorites.
    - Template: templates are the user interfaces (pages) that will be presented to the users. For example, before running the service that deletes a document from the favorites, I want the user to see a screen with the document ID and a Confirm button, so that he is sure he is deleting the right record. That screen can be created as a template. In this exercise we will build templates for the main services, as well as the page that lists all of a user's favorites and offers the edit and delete actions. Templates are nothing more than HTML pages with IDOC scripts.

    Our sequence of activities for developing this component will be:

    - Create the database table
    - Create the component using the Component Wizard
    - Create the Queries to insert, edit, delete and list favorites
    - Create the Services that execute these Queries
    - Create the templates, which are the pages that will interact with the users
    - Create the links, on the content information page and in the My Content Server tray

    Well then, let's get started! Check out this tutorial in full by clicking this link: http://blogs.oracle.com/ecmbrasil/Tutorial_Componente_Banco.pdf

    Happy coding! :-)

    Read the article

  • MDX using EXISTING, AGGREGATE, CROSSJOIN and WHERE

    - by James Rogers
    Using the EXISTING function to decode AGGREGATE members and nested sub-query filters is a well-published approach. Mosha wrote a good blog on it here and a more recent one here. The use of EXISTING in these scenarios is very useful and sometimes the only option when dealing with multi-select filters. However, there are some limitations I have run across when using the EXISTING function against an AGGREGATE member:

    - The AGGREGATE member must be assigned to the Dimension.Hierarchy being detected by the EXISTING function in the calculated measure.
    - The AGGREGATE member cannot contain a crossjoin from any other dimension or hierarchy, or EXISTING will not be able to detect the members in the AGGREGATE member.

    Take the following query (from Adventure Works DW 2008):

        With
          member [Week Count] as
            'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
          member [Date].[Fiscal Weeks].[CM] as
            'AGGREGATE({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],
                        [Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],
                        [Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],
                        [Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]})'
        select
          {[Week Count]} on columns
        from
          [Adventure Works]
        where
          [Date].[Fiscal Weeks].[CM]

    Here we are attempting to count the existing fiscal weeks in the slicer. This is useful to get a per-week average for another member. Many applications generate queries in this manner (such as Oracle OBIEE). This query returns the correct result of (4) weeks.

    Now let's put a twist in it. What if the querying application submits the query in the following manner:

        With
          member [Week Count] as
            'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
          member [Customer].[Customer Geography].[CM] as
            'AGGREGATE({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],
                        [Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],
                        [Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],
                        [Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]})'
        select
          {[Week Count]} on columns
        from
          [Adventure Works]
        where
          [Customer].[Customer Geography].[CM]

    Here we are again attempting to count the existing fiscal weeks in the slicer. However, the AGGREGATE member is built on a different dimension (in name) than the one EXISTING is trying to detect. In this case the query returns (174), which is the total number of [Date].[Fiscal Weeks].[Fiscal Week].members defined in the dimension.

    Now another twist: the AGGREGATE member will be named appropriately and contain the hierarchy we are trying to detect with EXISTING, but it will be cross-joined with another hierarchy:

        With
          member [Week Count] as
            'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
          member [Date].[Fiscal Weeks].[CM] as
            'AGGREGATE({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],
                        [Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],
                        [Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],
                        [Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]} *
                       {[Customer].[Customer Geography].[Country].&[Australia],
                        [Customer].[Customer Geography].[Country].&[United States]})'
        select
          {[Week Count]} on columns
        from
          [Adventure Works]
        where
          [Date].[Fiscal Weeks].[CM]

    Once again, we are attempting to count the existing fiscal weeks in the slicer. Again, in this case the query returns (174), which is the total number of [Date].[Fiscal Weeks].[Fiscal Week].members defined in the dimension. However, in 2008 R2 this query returns the correct result of 4, and additionally, the following will return the count of existing countries as well (2):

        With
          member [Week Count] as
            'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
          member [Country Count] as
            'count(existing([Customer].[Customer Geography].[Country].members))'
          member [Date].[Fiscal Weeks].[CM] as
            'AGGREGATE({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],
                        [Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],
                        [Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],
                        [Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]} *
                       {[Customer].[Customer Geography].[Country].&[Australia],
                        [Customer].[Customer Geography].[Country].&[United States]})'
        select
          {[Week Count]} on columns
        from
          [Adventure Works]
        where
          [Date].[Fiscal Weeks].[CM]

    2008 R2 seems to work as long as the AGGREGATE member is on at least one of the hierarchies attempting to be detected (i.e. [Date].[Fiscal Weeks] or [Customer].[Customer Geography]). If not, it seems that the engine cannot find a "point of entry" into the aggregate member and ignores it for calculated members.

    One way around this would be to put the sets from the AGGREGATE member explicitly in the WHERE clause (slicer). I realize this is only supported in SSAS 2005 and 2008. However, after talking with Chris Webb (his blog is here and I highly recommend following his efforts and musings), it is a far more efficient way to filter/slice a query:

        With
          member [Week Count] as
            'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
        select
          {[Week Count]} on columns
        from
          [Adventure Works]
        where
          ({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]},
           {[Customer].[Customer Geography].[Country].&[Australia],
            [Customer].[Customer Geography].[Country].&[United States]})

    This query returns the correct result of (4) weeks. Additionally, we can count the cross-join members of the two hierarchies in the slicer:

        With
          member [Week Count] as
            'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members) *
                   existing([Customer].[Customer Geography].[Country].members))'
        select
          {[Week Count]} on columns
        from
          [Adventure Works]
        where
          ({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]},
           {[Customer].[Customer Geography].[Country].&[Australia],
            [Customer].[Customer Geography].[Country].&[United States]})

    We get the correct number of (8) here.

    Read the article

  • Existential CAML - does an item exist?

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved.

    More CAML and existence. In "SharePoint List Issues" and "Passing the CAML thru the EYE of the NEEDLE" we saw how to use CAML to return a subset of a list and also how to check the existence of lists, fields, defaults, and values.

    Here is a general function that may be used to get a subset of a list by comparing a "text" type field to a given value. The function is pretty smart. It can be used to check existence or to return a collection of items that may be further processed. It handles non-existing fields and replaces them with the ubiquitous "Title", but only once!

        /// Build an SPQuery that returns a selected set of columns from a List
        /// titleField must be a "Text" type field
        /// When the titleField parameter is empty ("") "Title" is assumed
        /// When the title parameter is empty ("") All is assumed
        /// When the columnNames parameter is null, the query returns all the fields
        /// When the rowLimit parameter is 0, the query returns all the items;
        /// with a non-zero rowLimit, the query returns at most rowLimit items
        ///
        /// usage: to check if an item titled "Blah" exists in your list, do:
        /// colNames = {"Title"}
        /// col = GetListItemColumnByTitle(myList, "", "Blah", colNames, 1)
        /// Check col.Count; if > 0 the item exists and is in the collection
        private static SPListItemCollection GetListItemColumnByTitle(SPList list, string titleField, string title, string[] columnNames, uint rowLimit)
        {
            try
            {
                char QT = Convert.ToChar((int)34);
                SPQuery query = new SPQuery();
                if (title != "")
                {
                    string tf = titleField;
                    if (titleField == "") tf = "Title";
                    tf = CAMLThisName(list, tf, "Title");
                    StringBuilder titleQuery = new StringBuilder("<Where><Eq><FieldRef Name=");
                    titleQuery.Append(QT);
                    titleQuery.Append(tf);
                    titleQuery.Append(QT);
                    titleQuery.Append("/><Value Type=");
                    titleQuery.Append(QT);
                    titleQuery.Append("Text");
                    titleQuery.Append(QT);
                    titleQuery.Append(">");
                    titleQuery.Append(title);
                    titleQuery.Append("</Value></Eq></Where>");
                    query.Query = titleQuery.ToString();
                }
                if (columnNames.Length != 0)
                {
                    StringBuilder sb = new StringBuilder("");
                    bool TitleAlreadyIncluded = false;
                    foreach (string columnName in columnNames)
                    {
                        string tst = CAMLThisName(list, columnName, "Title");
                        //Allow Title only once
                        if (tst != "Title" || !TitleAlreadyIncluded)
                        {
                            sb.Append("<FieldRef Name=");
                            sb.Append(QT);
                            sb.Append(tst);
                            sb.Append(QT);
                            sb.Append("/>");
                            if (tst == "Title") TitleAlreadyIncluded = true;
                        }
                    }
                    query.ViewFields = sb.ToString();
                }
                if (rowLimit > 0)
                {
                    query.RowLimit = rowLimit;
                }
                SPListItemCollection col = list.GetItems(query);
                return col;
            }
            catch (Exception ex)
            {
                //Console.WriteLine("GetListItemColumnByTitle" + ex.ToString());
                //sw.WriteLine("GetListItemColumnByTitle" + ex.ToString());
                return null;
            }
        }

    Here I called it for a list in which “Author” (it is the internal name for “Created”) and “Blah” do not exist. The list of column names is:

        string[] columnNames = {"Test Column1", "Title", "Author", "Allow Multiple Ratings", "Blah"};

    So if I use this call, I get all the items for which “01-STD MIL_some” has the value of 1. The fields returned are: “Test Column1”, “Title”, and “Allow Multiple Ratings”. Because “Title” was already included and the default for non-existing fields is “Title”, it was not replicated for the 2 non-existing fields.

        SPListItemCollection col = GetListItemColumnByTitle(masterList, "01-STD MIL_some", "1", columnNames, 0);

    The following call checks if there are any items where “01-STD MIL_some” has the value of “1”. Note that I limited the number of returned items to 1.

        SPListItemCollection col = GetListItemColumnByTitle(masterList, "01-STD MIL_some", "1", columnNames, 1);

    The code also uses the CAMLThisName function, which checks for the existence of a field and returns its InternalName. This is yet another useful function that I use again and again.

        /// <summary>
        /// return a field's internal name (CAMLName)
        /// or the "default" name that you passed.
        /// To check existence pass "" or some funny name like "mud in your eye"
        /// </summary>
        public static string CAMLThisName(SPList list, string name, string def)
        {
            String CAMLName = def;
            SPField fld = GetFieldByName(list, name);
            if (fld != null)
            {
                CAMLName = fld.InternalName;
            }
            return CAMLName;
        }

    That's all folks!
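
    Spelled out, the existence check described in the usage comment collapses to a couple of lines. A minimal sketch, assuming an SPWeb in scope (the list name and title here are placeholders, not from the article):

        // Existence check built on the helper above ("web", "Tasks" and "Blah" are placeholders).
        SPList myList = web.Lists["Tasks"];
        string[] colNames = { "Title" };
        SPListItemCollection col = GetListItemColumnByTitle(myList, "", "Blah", colNames, 1);
        bool exists = (col != null && col.Count > 0);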

    Read the article

  • How to update a model in the database from ASP.NET MVC2 using Entity Framework?

    - by Eedoh
    Hello. I'm building an ASP.NET MVC2 application and using Entity Framework as the ORM. I am having trouble updating objects in the database. Every time I try entity.SaveChanges(), EF inserts a new line in the table, regardless of whether I want an update or an insert to be done. I tried attaching the object to the entity (like in this next example), but then I got {"An object with a null EntityKey value cannot be attached to an object context."} Here's my simple function for inserts and updates (it's not really about vehicles, but it's simpler to explain like this, although I don't think that this affects answers at all)...

        public static void InsertOrUpdateCar(this Vehicles entity, Cars car)
        {
            if (car.Id == 0 || car.Id == null)
            {
                entity.Cars.AddObject(car);
            }
            else
            {
                entity.Attach(car);
            }
            entity.SaveChanges();
        }

    I even tried using AttachTo("Cars", car), but I got the same exception. Does anyone have experience with this?
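
    For reference, the common ObjectContext upsert pattern attaches the entity and then explicitly marks it modified, since Attach() alone registers it as Unchanged (and requires the key, Id, to be populated, which is what the null-EntityKey error complains about). A hedged sketch against the poster's types, assuming the EF4 ObjectContext/ObjectSet API:

        // A common EF4 upsert sketch (an assumption-laden example, not the poster's final code).
        // Attach() registers the entity as Unchanged, so its state must be flipped
        // to Modified explicitly or SaveChanges() will write nothing.
        public static void InsertOrUpdateCar(this Vehicles entity, Cars car)
        {
            if (car.Id == 0)
            {
                entity.Cars.AddObject(car);        // new row -> INSERT
            }
            else
            {
                entity.Cars.Attach(car);           // requires the key (Id) to be set
                entity.ObjectStateManager.ChangeObjectState(car, EntityState.Modified); // -> UPDATE
            }
            entity.SaveChanges();
        }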

    Read the article

  • How to use Excel VBA to extract a Memo field from an Access database?

    - by the.jxc
    I have an Excel spreadsheet. I am connecting to an Access database via ODBC, something along the lines of:

        Set dbEng = CreateObject("DAO.DBEngine.40")
        Set oWspc = dbEng.CreateWorkspace("ODBCWspc", "", "", dbUseODBC)
        Set oConn = oWspc.OpenConnection("Connection", , True, "ODBC;DSN=CLIENTDB;")

    Then I use a query and fetch a result set to get some table data.

        Set oQuery = oConn.CreateQueryDef("tmpQuery")
        oQuery.Sql = "SELECT idField, memoField FROM myTable"
        Set oRs = oQuery.OpenRecordset

    The problem now arises. My field is a dbMemo type because the maximum content length is up to a few hundred chars. It's not that long, and in fact the value I'm reading is only a dozen characters. But Excel just doesn't seem able to handle the Memo field content at all. My code...

        ActiveCell = oRs.Fields("memoField")

    ...gives the error: Run-time error '3146': ODBC--call failed. Any suggestions? Can Excel VBA actually get at memo field data, or is it just completely impossible? I get exactly the same error from GetChunk as well:

        ActiveCell = oRs.Fields("memoField").GetChunk(0, 2)

    ...also gives the error: Run-time error '3146': ODBC--call failed. Converting to a text field makes everything work fine. However, some data is truncated to 255 characters of course, which means that isn't a workable solution.

    Read the article

  • OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code with java -Djavax.net.debug=ssl LDAPConnector, I get the following exception trace (Java version 1.6.0_17):

        trigger seeding of SecureRandom
        done seeding SecureRandom
        %% No cached client session
        *** ClientHello, TLSv1
        RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 }
        Session ID: {}
        Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SHA, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA]
        Compression Methods: { 0 }
        ***
        Thread-0, WRITE: TLSv1 Handshake, length = 73
        Thread-0, WRITE: SSLv2 client hello message, length = 98
        Thread-0, received EOFException: error
        Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
        Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure
        Thread-0, WRITE: TLSv1 Alert, length = 2
        Thread-0, called closeSocket()
        main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
        javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake]
            at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source)
            at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source)
            at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source)
            at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source)
            at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source)
            at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source)
            at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source)
            at javax.naming.spi.NamingManager.getInitialContext(Unknown Source)
            at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source)
            at javax.naming.InitialContext.init(Unknown Source)
            at javax.naming.InitialContext.<init>(Unknown Source)
            at javax.naming.directory.InitialDirContext.<init>(Unknown Source)
            at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43)
            at LDAPConnector.main(LDAPConnector.java:237)
        Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source)
            at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source)
            at java.io.BufferedInputStream.fill(Unknown Source)
            at java.io.BufferedInputStream.read1(Unknown Source)
            at java.io.BufferedInputStream.read(Unknown Source)
            at com.sun.jndi.ldap.Connection.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
        Caused by: java.io.EOFException: SSL peer shut down incorrectly
            at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source)
            ... 9 more

    I am able to connect to the same secure LDAP server, however, if I use another version of Java (1.6.0_14). I have created and installed the server certificates in the cacerts of both JREs, as mentioned in this guide -- OpenLDAP with SSL. When I run ldapsearch -x on the server I get:

        # extended LDIF
        #
        # LDAPv3
        # base <dc=localdomain> (default) with scope subtree
        # filter: (objectclass=*)
        # requesting: ALL
        #

        # localdomain
        dn: dc=localdomain
        objectClass: top
        objectClass: dcObject
        objectClass: organization
        o: localdomain
        dc: localdomain

        # admin, localdomain
        dn: cn=admin,dc=localdomain
        objectClass: simpleSecurityObject
        objectClass: organizationalRole
        cn: admin
        description: LDAP administrator

        # search result
        search: 2
        result: 0 Success

        # numResponses: 3
        # numEntries: 2

    On running openssl s_client -connect ldap.natraj.com:636 -showcerts, I obtain the self-signed certificate. My slapd.conf file is as follows:

        #######################################################################
        # Global Directives:

        # Features to permit
        #allow bind_v2

        # Schema and objectClass definitions
        include         /etc/ldap/schema/core.schema
        include         /etc/ldap/schema/cosine.schema
        include         /etc/ldap/schema/nis.schema
        include         /etc/ldap/schema/inetorgperson.schema

        # Where the pid file is put. The init.d script
        # will not stop the server if you change this.
        pidfile         /var/run/slapd/slapd.pid

        # List of arguments that were passed to the server
        argsfile        /var/run/slapd/slapd.args

        # Read slapd.conf(5) for possible values
        loglevel        none

        # Where the dynamically loaded modules are stored
        modulepath      /usr/lib/ldap
        moduleload      back_hdb

        # The maximum number of entries that is returned for a search operation
        sizelimit       500

        # The tool-threads parameter sets the actual amount of cpu's that is used
        # for indexing.
        tool-threads    1

        #######################################################################
        # Specific Backend Directives for hdb:
        # Backend specific directives apply to this backend until another
        # 'backend' directive occurs
        backend         hdb

        #######################################################################
        # Specific Backend Directives for 'other':
        # Backend specific directives apply to this backend until another
        # 'backend' directive occurs
        #backend        <other>

        #######################################################################
        # Specific Directives for database #1, of type hdb:
        # Database specific directives apply to this databasse until another
        # 'database' directive occurs
        database        hdb

        # The base of your directory in database #1
        suffix          "dc=localdomain"

        # rootdn directive for specifying a superuser on the database. This is needed
        # for syncrepl.
        rootdn          "cn=admin,dc=localdomain"

        # Where the database file are physically stored for database #1
        directory       "/var/lib/ldap"

        # The dbconfig settings are used to generate a DB_CONFIG file the first
        # time slapd starts. They do NOT override existing an existing DB_CONFIG
        # file. You should therefore change these settings in DB_CONFIG directly
        # or remove DB_CONFIG and restart slapd for changes to take effect.

        # For the Debian package we use 2MB as default but be sure to update this
        # value if you have plenty of RAM
        dbconfig set_cachesize 0 2097152 0

        # Sven Hartge reported that he had to set this value incredibly high
        # to get slapd running at all. See http://bugs.debian.org/303057 for more
        # information.

        # Number of objects that can be locked at the same time.
        dbconfig set_lk_max_objects 1500
        # Number of locks (both requested and granted)
        dbconfig set_lk_max_locks 1500
        # Number of lockers
        dbconfig set_lk_max_lockers 1500

        # Indexing options for database #1
        index objectClass eq

        # Save the time that the entry gets modified, for database #1
        lastmod on

        # Checkpoint the BerkeleyDB database periodically in case of system
        # failure and to speed slapd shutdown.
        checkpoint 512 30

        # Where to store the replica logs for database #1
        # replogfile /var/lib/ldap/replog

        # The userPassword by default can be changed
        # by the entry owning it if they are authenticated.
        # Others should not be able to see it, except the
        # admin entry below
        # These access lines apply to database #1 only
        access to attrs=userPassword,shadowLastChange
                by dn="cn=admin,dc=localdomain" write
                by anonymous auth
                by self write
                by * none

        # Ensure read access to the base for things like
        # supportedSASLMechanisms. Without this you may
        # have problems with SASL not knowing what
        # mechanisms are available and the like.
        # Note that this is covered by the 'access to *'
        # ACL below too but if you change that as people
        # are wont to do you'll still need this if you
        # want SASL (and possible other things) to work
        # happily.
        access to dn.base="" by * read

        # The admin dn has full write access, everyone else
        # can read everything.
        access to *
                by dn="cn=admin,dc=localdomain" write
                by * read

        # For Netscape Roaming support, each user gets a roaming
        # profile for which they have write access to
        #access to dn=".*,ou=Roaming,o=morsnet"
        #        by dn="cn=admin,dc=localdomain" write
        #        by dnattr=owner write

        #######################################################################
        # Specific Directives for database #2, of type 'other' (can be hdb too):
        # Database specific directives apply to this databasse until another
        # 'database' directive occurs
        #database        <other>

        # The base of your directory for database #2
        #suffix         "dc=debian,dc=org"

        #######################################################################
        # SSL:
        # Uncomment the following lines to enable SSL and use the default
        # snakeoil certificates.
        #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
        #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
        TLSCipherSuite TLS_RSA_AES_256_CBC_SHA
        TLSCACertificateFile /etc/ldap/ssl/server.pem
        TLSCertificateFile /etc/ldap/ssl/server.pem
        TLSCertificateKeyFile /etc/ldap/ssl/server.pem

    My ldap.conf file is:

        #
        # LDAP Defaults
        #
        # See ldap.conf(5) for details
        # This file should be world readable but not world writable.

        HOST    ldap.natraj.com
        PORT    636
        BASE    dc=localdomain
        URI     ldaps://ldap.natraj.com

        TLS_CACERT      /etc/ldap/ssl/server.pem
        TLS_REQCERT     allow

        #SIZELIMIT      12
        #TIMELIMIT      15
        #DEREF          never

    Read the article

  • Extending MySQLi

    - by FRKT
    Hello, I've run into problems extending the MySQLi class. It won't let me add any properties.

        class MySQLii extends MySQLi {

            public $database;

            public function MySQLii($host, $username, $password, $database) {
                // Initialize MySQLi
                parent::MySQLi($host, $username, $password, $database);
                // Save database name
                $this->database = $database;
            }
        }

        $mysqlii = new MySQLii('localhost', 'root', 'password', 'database');
        var_dump($mysqlii);

        object(MySQLii)#1 (17) {
          ["affected_rows"]=> int(0)
          ["client_info"]=> string(48) "mysqlnd 5.0.5-dev - 081106 - $Revision: 289630 $"
          ["client_version"]=> int(50005)
          ["connect_errno"]=> int(0)
          ["connect_error"]=> NULL
          ["errno"]=> int(0)
          ["error"]=> string(0) ""
          ["field_count"]=> int(0)
          ["host_info"]=> string(42) "MySQL host info: Localhost via UNIX socket"
          ["info"]=> NULL
          ["insert_id"]=> int(0)
          ["server_info"]=> string(6) "5.1.44"
          ["server_version"]=> int(50144)
          ["sqlstate"]=> string(5) "00000"
          ["protocol_version"]=> int(10)
          ["thread_id"]=> int(4019)
          ["warning_count"]=> int(0)
        }

    Note the absence of the database property I added in the MySQLii constructor. Am I missing something?

    Read the article

  • Accessing the native iPhone address book database to add and delete contacts?

    - by chaitanya
    Hi, in my application I need to implement an address book which contains the native address book details, and the user should be able to add and delete contacts from it, with the changes reflected in the native iPhone address book. I read somewhere that the iPhone's native address book database is accessible. In the documentation I also saw that add-contact and delete APIs are exposed for the address book. Can anyone please tell me how I can access the native address book of the iPhone, and how to add and delete contacts from it? Can anyone post sample code for this?

    Read the article

  • DataTable.Select Behaves Strangely Using ISNULL Operator on NULL DateTime Column

    - by Paul Williams
    I have a DataTable with a DateTime column, "DateCol", that can be DBNull. The DataTable has one row in it with a NULL value in this column. I am trying to query rows that have either a DBNull value in this column or a date that is greater than today's date. Today's date is 5/11/2010. I built a query to select the rows I want, but it did not work as expected. The query was:

        string query = "ISNULL(DateCol, '" + DateTime.MaxValue + "') > '" + DateTime.Today + "'";

    This results in the following query:

        "ISNULL(DateCol, '12/31/9999 11:59:59 PM') > '5/11/2010'"

    When I run this query, I get no results. It took me a while to figure out why. What follows is my investigation in the Visual Studio immediate window:

        > dt.Rows.Count
        1
        > dt.Rows[0]["DateCol"]
        {}
        > dt.Rows[0]["DateCol"] == DBNull.Value
        true
        > dt.Select("ISNULL(DateCol,'12/31/9999 11:59:59 PM') > '5/11/2010'").Length
        0    <-- I expected 1

    Trial and error showed a difference in the date checks at the following boundary:

        > dt.Select("ISNULL(DateCol, '12/31/9999 11:59:59 PM') > '2/1/2000'").Length
        0
        > dt.Select("ISNULL(DateCol, '12/31/9999 11:59:59 PM') > '1/31/2000'").Length
        1    <-- this was the expected answer

    The query works fine if I wrap the DateTime field in # instead of quotes:

        > dt.Select("ISNULL(DateCol, #12/31/9999#) > #5/11/2010#").Length
        1

    My machine's regional settings are currently set to EN-US, and the short date format is M/d/yyyy. Why did the original query return the wrong results? Why would it work fine if the date was compared against 1/31/2000 but not against 2/1/2000?
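
    For what it's worth, in DataColumn expression syntax # marks a date literal, while quoted values are compared as strings, which is consistent with the 1/31 vs. 2/1 boundary above ("2/..." sorts after "12/..." as text). A sketch of building the same filter with # delimiters and an invariant format (my example, matching the question's column name):

        // Build the filter with # date literals and an invariant format
        // so the comparison is done on dates, not on strings.
        using System;
        using System.Data;
        using System.Globalization;

        class FilterDemo
        {
            static void Main()
            {
                var dt = new DataTable();
                dt.Columns.Add("DateCol", typeof(DateTime));
                dt.Rows.Add(DBNull.Value); // the NULL row from the question

                string filter = string.Format(
                    CultureInfo.InvariantCulture,
                    "ISNULL(DateCol, #{0:MM/dd/yyyy}#) > #{1:MM/dd/yyyy}#",
                    DateTime.MaxValue, DateTime.Today);

                Console.WriteLine(dt.Select(filter).Length); // prints 1
            }
        }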

    Read the article

  • How can I access the group of a linq group-by query from a nested repeater control?

    - by Duke
    I'm using a LINQ group-by query (with two grouping parameters) and would like to use the resulting data in a nested repeater.

        var dateGroups = from row in data.AsEnumerable()
                         group row by new { StartDate = row["StartDate"], EndDate = row["EndDate"] };

    "data" is a DataTable from an SqlDataAdapter-filled DataSet. "dateGroups" is used in the parent repeater, and I can access the group keys using Eval("key.StartDate") and Eval("key.EndDate"). Since dateGroups actually contains all the data rows grouped neatly by Start/End date, I'd like to access those rows to display the data in a child repeater. To what would I set the child repeater's DataSource? I have tried every expression in markup I could think of; I think the problem is that I'm trying to access an anonymous member (and I don't know how). In case it doesn't turn out to be obvious, what would be the expression to access the elements in each iteration of the child repeater? Is there an expression that would let me set the DataSource in the markup, or will it have to be in the codebehind on some event in the parent repeater?
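
    One common shape for this (a sketch, with a hypothetical control ID): each element of dateGroups is an IGrouping over the data rows, and IGrouping is itself an IEnumerable of those rows, so the group can feed the child repeater directly, either in markup via DataSource='<%# Container.DataItem %>' or in the parent's ItemDataBound:

        // Parent repeater's ItemDataBound: each DataItem is an IGrouping<TKey, DataRow>,
        // which implements IEnumerable<DataRow>, so it can be bound to the child repeater.
        // "ChildRepeater" is a hypothetical control ID.
        protected void ParentRepeater_ItemDataBound(object sender, RepeaterItemEventArgs e)
        {
            if (e.Item.ItemType == ListItemType.Item ||
                e.Item.ItemType == ListItemType.AlternatingItem)
            {
                var group = (IEnumerable<System.Data.DataRow>)e.Item.DataItem;
                var child = (Repeater)e.Item.FindControl("ChildRepeater");
                child.DataSource = group;
                child.DataBind();
            }
        }

    Inside the child repeater each item is then a DataRow, so a field can be read with an explicit cast such as ((DataRow)Container.DataItem)["StartDate"].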

    Read the article

  • How to identify specific chipset driver on client to add to WDS driver database?

    - by Sandy
    I have two new models to support, the Optiplex 790 and the Lenovo L520, and I need to add their chipset drivers to the Windows Deployment server. Intel's driver package is a suite of many chipset drivers, and WDS discovers far too many of them. Prior to installing the chipset drivers, SM_Bus shows a yellow exclamation in the Device Manager. When I install Intel's suite, the drivers are installed and the SM_Bus warning is gone. How can I identify which drivers are actually being used?

    Read the article

  • Best practices for combining Lucene.NET and a relational database?

    - by FlySwat
    I'm working on a project where I will have a LOT of data, and it will be searchable by several forms that are very efficiently expressed as SQL queries, but it also needs to be searchable via natural language processing. My plan is to build an index using Lucene for this form of search. My question is: if I do this and perform a search, Lucene will return the IDs of matching documents in the index, and I then have to look up these entities in the relational database. This could be done in two ways (that I can think of so far):

    1. N queries (horrible)
    2. Pass all the IDs to a stored procedure at once (perhaps as a comma-delimited parameter). This has the downside of being limited to the max parameter size, and the slow performance of a UDF to split the string into a temporary table.

    I'm almost tempted to mirror everything into Lucene's index, so that I can periodically generate the index from the backing store, but only need to access the index for the frontend. Advice?
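
    A third option sits between the two: batch the IDs into parameterized IN lists of a fixed size, which avoids both the one-query-per-ID round trips and the single giant string parameter. A hedged sketch (the table and column names are made up):

        // Fetch entities for Lucene hits in fixed-size batches of parameterized IN lists.
        // "Documents", "Id", "Title" and "Body" are hypothetical names; adjust to the real schema.
        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Linq;

        static class HitLookup
        {
            public static void LoadByIds(SqlConnection conn, IList<int> ids, int batchSize = 500)
            {
                for (int offset = 0; offset < ids.Count; offset += batchSize)
                {
                    var batch = ids.Skip(offset).Take(batchSize).ToList();
                    var names = batch.Select((_, i) => "@p" + i).ToArray();

                    using (var cmd = conn.CreateCommand())
                    {
                        cmd.CommandText =
                            "SELECT Id, Title, Body FROM Documents WHERE Id IN (" +
                            string.Join(",", names) + ")";
                        for (int i = 0; i < batch.Count; i++)
                            cmd.Parameters.AddWithValue(names[i], batch[i]);

                        using (var reader = cmd.ExecuteReader())
                        {
                            while (reader.Read())
                            {
                                // materialize the entity here
                            }
                        }
                    }
                }
            }
        }

    On SQL Server 2008, a table-valued parameter does the same job in a single round trip without string splitting.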

    Read the article

  • How to actually query a Chinese address with Google Maps API geocoding?

    - by Robert
    I'm following the demo code from the phpsqlgeocode.html article. In the db, I inserted some Chinese addresses, which are UTF-8 encoded. I found that after urlencode() is applied to the Chinese address, the output is wrong. Like this one:

        http://maps.google.com.tw/maps/geo?output=csv&key=ABQIAAAAfG3KxFZXjEslq8VNxMBpKRR08snBovzCxLQZ9DWwpnzxH-ROPxSAS9Q36m-6OOy0qlwTL6Ht9qp87w&q=%3F%3F%3F%3F%3F%3F%3F%3F%3F132%3F

    The output (it can't be queried from PHP; it has to be tested as a browser URL) is then:

        200,5,59.3266963,18.2733433

    The address is actually located in Taichung, Taiwan, but this result is in Sweden, in Europe. Yet when I paste the Chinese address (such as ???????? ?131?56?58?60?) into the URL directly, the result turns out to be fine!!! So my question is: how do I make sure it sends out the original Chinese address? How do I prevent urlencode() from mangling it? I found that taking urlencode() away doesn't change anything. (I've changed the MAPS_HOST from maps.google.com to maps.google.com.tw.) (I'm sure my key is right, and other English-address geocoding works fine.) Thanks!!

    Read the article

  • SQL Server Snapshot Replication Subscriber (Editable or Read-Only)

    - by NateReid
    I need to create a copy of my SQL 2008 R2 Enterprise database and have it located on the same server as the original. I will be using this second copy of the database as the target of a mostly read-only website. I understand that if I create this copy of the database using snapshot replication, all data changes in the subscriber database will be overwritten in the event of the next replication. The web application will try to write to this database to record login attempts, etc., and will fail if its source database is read-only. In my case I do not need to keep these auditing records, and they can therefore be overwritten each time a new snapshot is applied. My question is whether SQL Server forces the subscriber database to be read-only, and is there any way around this? Thank you, Nate

    Read the article

  • How to back up database dumps which have only minor changes between the backups?

    - by wurlog
    I have a MySQL dump file every night, and I keep every one of them. Each is 14 GB in size, so my backup drive will be full soon. The difference from one day to another is only some 100 MB. How can I make the daily backups without wasting a lot of space? PS: I used tar to compress one file and the size went down to 5 GB. I hoped that when I compressed two files together the ratio would be better, but no: 2 dumps compressed are 10 GB.

    Read the article

  • Migrating to new dovecot server; Dovecot fails to authenticate using old password database

    - by Ironlenny
    I am migrating my company's intranet from an OS X server to an Ubuntu 12.04 server. We use a flat file to store user names and password hashes. This file is used by Apache and Dovecot to authenticate users. The Ubuntu server is running Dovecot 2.0 while the OS X server is running Dovecot 1.2. I have already migrated WebDAV, which uses Apache for authentication; authentication works there. I'm in the process of migrating our Prosody server, which uses Dovecot for authentication. Dovecot is up and running, but when I test authentication using either telnet (a login username password) or doveadm (sudo doveadm auth username), I get

        dovecot: auth: passwd-file(username): unknown user
        dovecot: auth: Debug: client out: FAIL#0111#011user=username

    in my log file. I can use sudo doveadm user username to perform a user lookup, and it will return the user's info. I can also generate a password hash locally, and Dovecot will authenticate the test password just fine.

    Read the article

  • Database Schema for survey polling application with a default choice.

    - by user156814
    I have a survey application where users can create surveys and give choices for every survey. Other users can choose their answers to the survey, and then polls are taken to get the results of the survey. I already have the database schema for this:

        Questions
            id, user_id, category_id, question_text, date_started

        Answers
            id, user_id, question_id, choice_id, explanation, date_added

        Choices
            id, question_id, choice_text

    As of now, users can choose their own choice answers to their surveys... but I want to be able to add a default "I don't care" or "I don't know" choice to every survey, for people who simply don't care enough about the topic to take sides or who can't choose. So let's say there's a survey that asks who was a better president: George W. Bush, Bill Clinton, Ronald Reagan, or Richard Nixon... I want to be able to add a default "I don't care" option. I was thinking of just adding that extra choice EVERY TIME a user creates a survey, but then I wouldn't have much control over the text for that choice after the survey has been created, and I want to know if there's a better way to do this, like creating another table or something. Thanks
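
    For what it's worth, the insert-at-creation approach described above can keep control over the choice text by flagging the row. A hedged sketch, assuming a hypothetical is_default column added to Choices (my assumption, not part of the schema above):

        // Sketch: insert the survey's default choice alongside the user's choices,
        // marked with a hypothetical is_default flag so it stays identifiable later.
        // Assumes Choices is extended with: is_default TINYINT NOT NULL DEFAULT 0.
        using System.Data.SqlClient;

        static class SurveySetup
        {
            public static void AddDefaultChoice(SqlConnection conn, int questionId)
            {
                const string sql =
                    "INSERT INTO Choices (question_id, choice_text, is_default) " +
                    "VALUES (@qid, @text, 1)";

                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@qid", questionId);
                    cmd.Parameters.AddWithValue("@text", "I don't care");
                    cmd.ExecuteNonQuery();
                }
            }
        }

    The main alternative is to store no row at all and let a NULL choice_id in Answers stand for "no opinion"; that avoids duplicating the row per survey but pushes the special case into every query that tallies results.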

    Read the article

  • Moving Portable Office. Have a LAN & Phone line query?

    - by John Smith
    We are about to move a portable office that has 8 phones and 12 LAN connections. They are all wired back to our main switch and Nortel BCM200 phone system, which are only about 20 m (65 ft) away. After the move, the office will be 180 m (600 ft) away from these. What is the maximum cable length for a digital phone line? I am aware that a LAN connection can only be 100 m (328 ft) when using Cat5e UTP. Does this rule apply to phone connections also? If so, how can I extend beyond 100 m for the phones? I was going to install about three cabinets, 3 switches and 6 patch panels for the LAN connections, but the idea struck me tonight that maybe I could run a fibre optic line. Would this be feasible? Any feedback on this is greatly appreciated. Thanks

    Read the article

  • Viewing a large field in a query in SQL management studio with ZOOM?

    - by smithym
    Hi there, can anyone help? I am using SQL Server Management Studio (SQL Server 2008) to run queries, and some of the fields that come back, varchar(max) for example, contain a lot of information. Is there a zoom feature that opens a window and shows me the contents with vertical and horizontal scrollbars? I remember there was one; I thought it was F2, but I must have been mistaken, as it doesn't work. Now I have to scroll horizontally on the field, and it's really difficult to see everything. Also, some of the fields contain newline codes, etc., so it would be great if the zoom feature would display the info using those newline codes. Anybody know how to do this?

    Read the article

  • Is there a jdbc driver for SQL Server that can search for database instances on network?

    - by djangofan
    Is there a JDBC driver for SQL Server that can search for database instances on the network? I just want to emulate "OSQL -L" from the JDBC driver. I don't want to have to call OSQL from JNI or something, and I also don't want to have to write my own code for scanning a UDP port. I want my Java application to be able to search for live SQL Servers and prompt me with a list of available servers. Isn't this a reasonable expectation for a JDBC driver? OSQL.exe can do it, so why not the JDBC driver?
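
    For context on what "OSQL -L" actually does: it queries the SQL Server Browser service over UDP port 1434 and lists the responders. On the .NET side this is wrapped by SqlDataSourceEnumerator; a sketch of that managed equivalent is below for comparison (C# rather than JDBC; the JDBC drivers I know of don't expose an equivalent call, which is exactly the gap the question describes):

        // What OSQL -L does, via the managed wrapper around the UDP 1434
        // SQL Server Browser (SSRP) query. Shown for comparison only.
        using System;
        using System.Data;
        using System.Data.Sql;

        class ListServers
        {
            static void Main()
            {
                DataTable servers = SqlDataSourceEnumerator.Instance.GetDataSources();
                foreach (DataRow row in servers.Rows)
                {
                    Console.WriteLine("{0}\\{1} (version {2})",
                        row["ServerName"], row["InstanceName"], row["Version"]);
                }
            }
        }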

    Read the article

  • Is there a more easy way to create a WCF/OData Data Service Query Provider?

    - by routeNpingme
    I have a simple little data model resembling the following:

        InventoryContext {
            IEnumerable<Computer> GetComputers()
            IEnumerable<Printer> GetPrinters()
        }

        Computer {
            public string ComputerName { get; set; }
            public string Location { get; set; }
        }

        Printer {
            public string PrinterName { get; set; }
            public string Location { get; set; }
        }

    The results come from a non-SQL source, so this data does not come from Entity Framework connected up to a database. Now I want to expose the data through a WCF OData service. The only way I've found to do that thus far is by creating my own Data Service Query Provider, per this blog tutorial: http://blogs.msdn.com/alexj/archive/2010/01/04/creating-a-data-service-provider-part-1-intro.aspx ... which is great, but seems like a pretty involved undertaking. The code for the provider would be 4 times longer than my whole data model, just to generate all of the resource sets and property definitions. Is there something like a generic provider in between Entity Framework and writing your own data source from zero? Maybe some way to build an object data source or something, so that the magical WCF unicorns can pick up my data and ride off into the sunset without having to explicitly code the provider?
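
    For reference, WCF Data Services also ships a reflection provider that sits in exactly that middle ground: expose the model as IQueryable<T> properties, mark the keys, and DataService<T> wires up the rest. A hedged sketch against the post's model (the key choices and the .NET 4-style InitializeService signature are assumptions):

        // Sketch of the reflection-provider route (assumes ComputerName/PrinterName
        // can serve as entity keys; other names follow the post's model).
        using System.Collections.Generic;
        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Linq;

        [DataServiceKey("ComputerName")]
        public class Computer
        {
            public string ComputerName { get; set; }
            public string Location { get; set; }
        }

        [DataServiceKey("PrinterName")]
        public class Printer
        {
            public string PrinterName { get; set; }
            public string Location { get; set; }
        }

        public class InventoryContext
        {
            // The reflection provider picks up IQueryable<T> properties as entity sets.
            public IQueryable<Computer> Computers
            {
                get { return GetComputers().AsQueryable(); }
            }

            public IQueryable<Printer> Printers
            {
                get { return GetPrinters().AsQueryable(); }
            }

            private IEnumerable<Computer> GetComputers() { /* non-SQL source */ yield break; }
            private IEnumerable<Printer> GetPrinters() { /* non-SQL source */ yield break; }
        }

        public class InventoryService : DataService<InventoryContext>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

    The reflection provider is read-only out of the box; updates would require implementing IUpdatable on the context.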

    Read the article
