Search Results

Search found 25852 results on 1035 pages for 'linq query syntax'.


  • We have our standards, and we need them

    - by Tony Davis
    The presenter suddenly broke off. He was midway through his section on how to apply to the relational database the Continuous Delivery techniques that allow for rapid-fire rounds of development and refactoring, while always retaining a "production-ready" state. He sighed deeply and then launched into an astonishing diatribe against Database Administrators, with much of his frustration directed at Oracle DBAs in particular.

    In broad strokes, he painted the picture of a brave new deployment philosophy being frustratingly shackled by the relational database, and especially by the attitudes of the guardians of these databases. DBAs, he said, shunned change and "still favored tools I'd have been embarrassed to use in the '80s". DBAs, Oracle DBAs especially, were more attached to their vendor than to their employer, since the former was the primary source of their career longevity and spectacular remuneration. He contended that someone could produce the best IDE or tool in the world for Oracle DBAs and yet none of them would give a stuff, unless it happened to come from the "mother ship". I sat blinking in astonishment at the speaker's vehemence, and glanced around nervously. Nobody in the audience disagreed, and a few nodded in assent.

    Although the primary target of the outburst was the Oracle DBA, it made me wonder. Are we who work with SQL Server database professionals, or merely SQL Server fanbois? Do DBAs, in general, have an image problem? Is it a good career move to be seen to be holding onto a particular product by the whites of our knuckles, to the exclusion of all else?

    If we seek a broad, open-minded knowledge of our chosen technology, the database, and are blessed with merely mortal powers of learning, then we like standards. Vendors of RDBMSs generally don't conform to standards by instinct, but by customer demand. Microsoft has made great strides in adopting the international SQL Standards, where possible, thanks to considerable lobbying by the community. The implementation of window functions is a great example. There is still work to do, though. SQL Server, for example, has an unusable version of the Information Schema. One cast-iron rule of any RDBMS is that we must be able to query the metadata using the same language that we use to query the data, i.e. SQL, and we do this by running queries against the INFORMATION_SCHEMA views. Developers who've attempted to apply a standard query that works on MySQL, or some other database, but doesn't produce the expected results on SQL Server, are advised to shun the Standards-based approach in favor of the vendor-specific one, using the catalog views. The argument behind this is sound and well-documented, and of course we all use those catalog views, out of necessity. And yet, as database professionals committed to supporting the best databases for the business, whatever they are now and in the future, surely our hearts should sink somewhat when we advocate a vendor-specific approach to a developer struggling with something as simple as writing a guard clause. And they should sink further when we read notes in the Microsoft documentation informing us that we shouldn't rely on INFORMATION_SCHEMA to reliably identify the schema of an object, in SQL Server!
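    To make the divide concrete, here is a sketch of a guard clause written both ways (table and schema names are hypothetical):

        -- Standards-based guard clause, portable in principle:
        IF EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES
                   WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'Orders')
            DROP TABLE dbo.Orders;

        -- The vendor-specific equivalent using SQL Server's catalog views:
        IF EXISTS (SELECT 1 FROM sys.tables t
                   JOIN sys.schemas s ON s.schema_id = t.schema_id
                   WHERE s.name = 'dbo' AND t.name = 'Orders')
            DROP TABLE dbo.Orders;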

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high-cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now "xVelocity in-memory analytics engine (VertiPaq)", but using xVelocity or VertiPaq when we talk about Analysis Services has the same meaning. In this post I'll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to optimize.

    A first approach can be to look in the DataDir of Analysis Services, find the folder containing the database, and then look for the biggest files in all subfolders; the name of each file contains the name of the column it stores. However, this heuristic process is not very practical. A better approach is to use a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query window on the database you are interested in, and execute it) you will see all database objects sorted by used size in descending order:

        SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS ORDER BY used_size DESC

    You can look at the first rows to understand which are the most expensive columns in your Tabular model. The interesting data provided are:
    - TABLE_ID: the name of the object; it can also be a dictionary or an index
    - COLUMN_ID: the name of the column the object belongs to; you may also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes
    - RECORDS_COUNT: the number of rows in the column
    - USED_SIZE: the memory used by the object

    By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your Tabular model. Your options are:
    - Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the Tabular model.
    - Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating-point number but two decimals are good enough (e.g. a temperature), round the number to the decimal precision that is relevant to you.
    - Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article.
    - Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don't have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower-density columns (those with a small number of distinct values) and going to the higher-density columns (those with high cardinality) is the technique that provides the best compression ratio.

    After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq, and want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session "VertiPaq Under the Hood" on March 21 at 08:00 GMT.

    Read the article

  • Using Recursive SQL and XML trick to PIVOT(OK, concat) a "Document Folder Structure Relationship" table, works like MySQL GROUP_CONCAT

    - by Kevin Shyr
    I'm in the process of building out a Data Warehouse and encountered this issue along the way. In the environment, there is a table that stores all the folders, one row per level. For example, if a document is created at {App Path}\Level 1\Level 2\Level 3\{document}, then the DocumentFolder table would look like this:

        ID    ID_Parent    FolderName
        1     NULL         Level 1
        2     1            Level 2
        3     2            Level 3

    To my understanding, the table was built this way so that:
    - Each proposal can have multiple documents stored at various locations.
    - Different users working on the proposal will have different access levels to the folders; if a user is assigned access to a folder level, she/he can see all the subfolders and their content.

    We can understand from an application point of view why this table was built this way. But you can quickly see the pain this causes the report writer who has to show a document link on the report. I wasn't surprised to find that the report query had 5 self outer joins, which is at the mercy of nobody creating a document that is buried 6 levels deep, not to mention the degradation in performance.

    With the help of 2 posts (linked at the end of this post), I was able to come up with this solution:
    - Use recursive SQL to build out the folder path.
    - Use the SQL XML trick to concatenate the strings.

    Code (a reminder: I built this code in a stored procedure. If you copy the syntax into a simple query window and execute it, you'll get an incorrect syntax error):

        -- Get all folders and group them by the original DocumentFolderID in the PTSDocument table
        ;WITH DocFoldersByDocFolderID (PTSDocumentFolderID_Original, PTSDocumentFolderID_Parent, sDocumentFolder, nLevel)
        AS (
            -- first member
            SELECT 'PTSDocumentFolderID_Original' = d1.PTSDocumentFolderID
                 , PTSDocumentFolderID_Parent
                 , 'sDocumentFolder' = sName
                 , 'nLevel' = CONVERT(INT, 1000000)
            FROM (SELECT DISTINCT PTSDocumentFolderID
                  FROM dbo.PTSDocument_DY WITH (READPAST)) AS d1
                 INNER JOIN dbo.PTSDocumentFolder_DY AS df1 WITH (READPAST)
                     ON d1.PTSDocumentFolderID = df1.PTSDocumentFolderID
            UNION ALL
            -- recursive member
            SELECT ddf1.PTSDocumentFolderID_Original
                 , df1.PTSDocumentFolderID_Parent
                 , 'sDocumentFolder' = df1.sName
                 , 'nLevel' = ddf1.nLevel - 1
            FROM dbo.PTSDocumentFolder_DY AS df1 WITH (READPAST)
                 INNER JOIN DocFoldersByDocFolderID AS ddf1
                     ON df1.PTSDocumentFolderID = ddf1.PTSDocumentFolderID_Parent
        )
        -- Flatten out the folder path
        , DocFolderSingleByDocFolderID (PTSDocumentFolderID_Original, sDocumentFolder)
        AS (
            SELECT dfbdf.PTSDocumentFolderID_Original
                 , 'sDocumentFolder' = STUFF((SELECT '\' + sDocumentFolder
                                              FROM DocFoldersByDocFolderID
                                              WHERE PTSDocumentFolderID_Original = dfbdf.PTSDocumentFolderID_Original
                                              ORDER BY PTSDocumentFolderID_Original, nLevel
                                              FOR XML PATH ('')), 1, 1, '')
            FROM DocFoldersByDocFolderID AS dfbdf
            GROUP BY dfbdf.PTSDocumentFolderID_Original
        )

    And voila: I use the second CTE to join back to my original query (which is now a CTE for the source, as we can now use MERGE to do INSERT and UPDATE at the same time). Each part of this solution would not solve the problem by itself, because:
    - If I don't use recursion, I cannot build out the path properly. If I use the XML trick only, then I don't have the originating folder ID info that I need to link to the document.
    - If I don't use the XML trick, then I don't have one row per document to show in the report. I could conceivably do this in a report function, but I'd rather not deal with the leading or trailing backslash and how to attach the document name.
    - PIVOT doesn't do strings, and UNPIVOT runs into the same problem as the above.

    I'm excited that each version of SQL Server provides us new tools to solve old problems and/or enables us to solve problems in a more elegant way. The 2 posts that helped me along:
    - Recursive Queries Using Common Table Expression
    - How to use GROUP BY to concatenate strings in SQL server?

    Read the article

  • Executing Stored Procedures in Visual Studio LightSwitch.

    - by dataintegration
    A LightSwitch project is a very easy way to visualize and manipulate information directly from one of our ADO.NET providers. But when it comes to executing stored procedures, it can be a bit more complicated. In this article, we will demonstrate how to execute a stored procedure in LightSwitch. For the purposes of this article, we will be using the RSSBus Email Data Provider, but the same process will work with any of our ADO.NET providers.

    Creating the RIA Service.

    Step 1: Open Visual Studio and create a new WCF RIA Service Class Project.

    Step 2: Add a reference to the RSSBus Email Data Provider dll in the (ProjectName).Web project.

    Step 3: Add a new Domain Service Class to the (ProjectName).Web project.

    Step 4: In the new Domain Service Class, create a new class with the attributes needed for the stored procedure's parameters. In this demo, the stored procedure we are executing is called SendMessage. The parameters we will need are as follows:

        public class NewMessage
        {
            [Key]
            public int ID { get; set; }
            public string FromEmail { get; set; }
            public string ToEmail { get; set; }
            public string Subject { get; set; }
            public string Text { get; set; }
        }

    Note: the created class must have an ID, which will serve as the key value.

    Step 5: Create a new method that will be executed when the insert event fires. Inside this method you can use standard ADO.NET code to execute the stored procedure.

        [Insert]
        public void SendMessage(NewMessage newMessage)
        {
            try
            {
                EmailConnection conn = new EmailConnection(connectionString);
                EmailCommand comm = new EmailCommand("SendMessage", conn);
                comm.CommandType = System.Data.CommandType.StoredProcedure;
                if (!newMessage.FromEmail.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@From", newMessage.FromEmail));
                if (!newMessage.ToEmail.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@To", newMessage.ToEmail));
                if (!newMessage.Subject.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@Subject", newMessage.Subject));
                if (!newMessage.Text.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@Text", newMessage.Text));
                comm.ExecuteNonQuery();
            }
            catch (Exception exc)
            {
                Console.WriteLine(exc.Message);
            }
        }

    Step 6: Create a query method. We are not going to be using getNewMessages(), so it does not matter what it returns for the purpose of our example, but you will need to create a method for the query event as well.

        [Query(IsDefault=true)]
        public IEnumerable<NewMessage> getNewMessages()
        {
            return null;
        }

    Step 7: Rebuild the whole solution.

    Creating the LightSwitch Project.

    Step 8: Open Visual Studio and create a new LightSwitch Application Project.

    Step 9: Under Data Sources, add a new data source and choose a WCF RIA Service.

    Step 10: Choose to add a new reference and select the (ProjectName).Web.dll generated from the RIA Service.

    Step 11: Select the entities you would like to import. In this case, we are using the recently created NewMessage entity.

    Step 12: In the Screens section, create a new screen and select the NewMessage entity as the Screen Data.

    Step 13: After you run the project, you will be able to add a new record and save it. This will execute the stored procedure and send the new message. If you create a screen to check the sent messages, you can refresh this screen to see the mail you sent.

    Sample Project: To help you get started using stored procedures in LightSwitch, download the fully functional sample project. You will also need the RSSBus Email Data Provider to make the connection. You can download a free trial here.

    Read the article

  • Using parameters in reports for Visual Studio 2008

    - by Jim Thomas
    This is my first attempt to create a Visual Studio 2008 report using parameters. I have created the dataset and the report. If I run it with a hard-coded filter on a column, the report runs fine. When I change the filter to '?' I keep getting this error:

        No overload for method 'Fill' takes '1' argument

    Obviously I am missing some way to connect the parameter on the dataset to a report parameter. I have defined a report parameter using the Report/Report Parameters screen. But how does that report parameter get tied to the dataset table parameter? Is there a special naming convention for the parameter? I have Googled this a half dozen times and read the MSDN documentation, but the examples all seem to use a different approach (like creating a SQL query rather than a table-based dataset), or they say to enter the parameter name as "=Parameters!name.value" but I can't figure out where to do that. One MSDN example suggested I needed to create some C# code using a SetParameters() method to make the connection. Is that how it is done? If anyone can recommend a good walk-through I'd appreciate it.

    Edit: After more reading, it appears I don't need report parameters at all. I am simply trying to add a parameter to the database query. So I would create a text box on the form, get the user's input, then apply that parameter programmatically to the Fill() argument list. The report parameter, on the other hand, is an ad-hoc value, generally entered by a user, that you want to appear on the report. But there is no relationship between report parameters and query/dataset parameters. Is that correct?
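    For reference, the pattern described in the edit usually looks something like this (a sketch with hypothetical typed-dataset names; the generated Fill overload gains one argument per '?' parameter in the table query):

        // Hypothetical dataset/adapter names, for illustration only.
        var adapter = new MyDataSetTableAdapters.OrdersTableAdapter();
        var table = new MyDataSet.OrdersDataTable();

        // The '?' placeholder in the dataset query becomes an extra Fill argument.
        adapter.Fill(table, customerTextBox.Text);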

    Read the article

  • Using Perl WWW::Facebook::API to Publish To Facebook Newsfeed

    - by Russell C.
    We use Facebook Connect on our site in conjunction with the WWW::Facebook::API CPAN module to publish to our users' newsfeeds when requested by the user. So far we've been able to successfully update the user's status using the following code:

        use WWW::Facebook::API;

        my $facebook = WWW::Facebook::API->new(
            desktop         => 0,
            api_key         => $fb_api_key,
            secret          => $fb_secret,
            session_key     => $query->cookie($fb_api_key.'_session_key'),
            session_expires => $query->cookie($fb_api_key.'_expires'),
            session_uid     => $query->cookie($fb_api_key.'_user')
        );

        my $response = $facebook->stream->publish(
            message => qq|Test status message|,
        );

    However, when we try to update the code above so we can publish newsfeed stories that include attachments and action links, as specified in the Facebook API documentation for Stream.Publish, we have tried about 100 different ways without any success. According to the CPAN documentation, all we should have to do is update our code to something like the following and pass the attachments and action links appropriately, which doesn't seem to work:

        my $response = $facebook->stream->publish(
            message      => qq|Test status message|,
            attachment   => $json,
            action_links => [@links],
        );

    For example, we are passing the above arguments as follows:

        $json = qq|{ 'name': 'i\'m bursting with joy',
                     'href': 'http://bit.ly/187gO1',
                     'caption': '{*actor*} rated the lolcat 5 stars',
                     'description': 'a funny looking cat',
                     'properties': { 'category': { 'text': 'humor', 'href': 'http://bit.ly/KYbaN'},
                                     'ratings': '5 stars' },
                     'media': [{ 'type': 'image',
                                 'src': 'http://icanhascheezburger.files.wordpress.com/2009/03/funny-pictures-your-cat-is-bursting-with-joy1.jpg',
                                 'href': 'http://bit.ly/187gO1'}] }|;

        @links = ["{'text':'Link 1', 'href':'http://www.link1.com'}",
                  "{'text':'Link 2', 'href':'http://www.link2.com'}"];

    Neither the above, nor any of the other representations we tried, seems to work. I'm hoping some other Perl developer out there has this working and can explain how to create the attachment and action_links variables appropriately in Perl for posting to the Facebook news feed through WWW::Facebook::API. Thanks in advance for your help!

    Read the article

  • MVC2 and MVC Futures causing RedirectToAction issues

    - by Darragh
    I've been trying to get the strongly typed version of RedirectToAction from the MVC Futures project to work, but I've been getting nowhere. Below are the steps I've followed and the errors I've encountered. Any help is much appreciated.

    I created a new MVC2 app and changed the About action on the HomeController to redirect to the Index page:

        Return RedirectToAction("Index")

    However, I wanted to use the strongly typed extensions, so I downloaded the MVC Futures from CodePlex and added a reference to Microsoft.Web.Mvc to my project. I added the following "import" statement to the top of HomeController.vb:

        Imports Microsoft.Web.Mvc

    I commented out the above RedirectToAction and added the following line:

        Return RedirectToAction(Of HomeController)(Function(c) c.Index())

    So far, so good. However, I noticed that if I uncomment the first (non-generic) RedirectToAction, it now causes the following compile error:

        Error 1 Overload resolution failed because no accessible 'RedirectToAction' can be called with these arguments:
        Extension method 'Public Function RedirectToAction(Of TController)(action As System.Linq.Expressions.Expression(Of System.Action(Of TController))) As System.Web.Mvc.RedirectToRouteResult' defined in 'Microsoft.Web.Mvc.ControllerExtensions': Data type(s) of the type parameter(s) cannot be inferred from these arguments. Specifying the data type(s) explicitly might correct this error.
        Extension method 'Public Function RedirectToAction(action As System.Linq.Expressions.Expression(Of System.Action(Of HomeController))) As System.Web.Mvc.RedirectToRouteResult' defined in 'Microsoft.Web.Mvc.ControllerExtensions': Value of type 'String' cannot be converted to 'System.Linq.Expressions.Expression(Of System.Action(Of mvc2test1.HomeController))'.

    Even though IntelliSense was showing 8 overloads (the original 6 non-generic overloads, plus the 2 new overloads from the Futures assembly), it seems that when compiling the code, the compiler would only 'find' the 2 extension methods from the Futures assembly. I thought this might be an issue of conflicting versions of the MVC2 assembly and the Futures assembly, so I added MvcDiagnostics.aspx from the Futures download to my project, and everything looked correct:

        ASP.NET MVC Assembly Information (System.Web.Mvc.dll)
        Assembly version: ASP.NET MVC 2 RTM (2.0.50217.0)
        Full name: System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
        Code base: file:///C:/WINDOWS/assembly/GAC_MSIL/System.Web.Mvc/2.0.0.0__31bf3856ad364e35/System.Web.Mvc.dll
        Deployment: GAC-deployed

        ASP.NET MVC Futures Assembly Information (Microsoft.Web.Mvc.dll)
        Assembly version: ASP.NET MVC 2 RTM Futures (2.0.50217.0)
        Full name: Microsoft.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=null
        Code base: file:///xxxx/bin/Microsoft.Web.Mvc.DLL
        Deployment: bin-deployed

    This is driving me crazy! Because I thought this might be some VB issue, I created a new MVC2 project using C# and tried the same as above.
    I added the following "using" statement to the top of HomeController.cs:

        using Microsoft.Web.Mvc;

    This time, in the About action method, I could only manage to call the strongly typed RedirectToAction by typing the full command as follows:

        return Microsoft.Web.Mvc.ControllerExtensions.RedirectToAction<HomeController>(this, c => c.Index());

    Even though I had a "using" statement at the top of the class, if I tried to call it as an extension method as follows:

        return RedirectToAction<HomeController>(c => c.Index());

    I would get the following compile error:

        Error 1 The non-generic method 'System.Web.Mvc.Controller.RedirectToAction(string)' cannot be used with type arguments

    What gives? It's not like I'm trying to do anything out of the ordinary. It's a simple vanilla MVC2 project with only a reference to the Futures assembly. I'm hoping I've missed something obvious, but I've been scratching my head for too long, so I figured I'd seek some assistance. If anyone's managed to get this simple scenario working (in VB and/or C#), could they please let me know what, if anything, they did differently? Thanks!

    Read the article

  • Using OLEDB parameters in .NET when connecting to an AS400/DB2

    - by Jeff Stock
    I have been pulling my hair out trying to figure out why I can't get parameters to work in my query. I have code written in VB.NET trying to query an AS/400. I have IBM Access for Windows installed and I am able to get queries to work, just not with parameters. Any time I include a parameter in my query (e.g. @MyParm) it doesn't work. It's like it doesn't replace the parameter with the value it should be. I get the following error:

        SQL0206: Column @MyParm not in specified tables

    Here's my code:

        Dim da As New OleDbDataAdapter
        Dim dt As New DataTable

        da.SelectCommand = New OleDbCommand
        da.SelectCommand.Connection = con
        da.SelectCommand.CommandText = "SELECT * FROM MyTable WHERE Col1 = @MyParm"

        With da.SelectCommand.Parameters
            .Add("@MyParm", OleDbType.Integer, 9)
            .Item("@MyParm").Value = 5
        End With

        ' I get the error here of course
        da.Fill(dt)

    I can replace @MyParm with a literal of 5 and it works fine. What am I missing here? I do this with SQL Server all the time, but this is the first time I am attempting it on an AS/400.
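    For anyone hitting the same wall: the OLE DB provider family generally binds parameters by position with '?' placeholders rather than by name, so a sketch of the usual fix (same names as above) would be:

        da.SelectCommand.CommandText = "SELECT * FROM MyTable WHERE Col1 = ?"

        With da.SelectCommand.Parameters
            ' With OLE DB the name is only a label; binding is positional.
            .Add("@MyParm", OleDbType.Integer, 9)
            .Item("@MyParm").Value = 5
        End With

        da.Fill(dt)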

    Read the article

  • Jqgrid + CodeIgniter

    - by Ivan
    I tried to make jqGrid work with CodeIgniter, but I could not do it. I only want to show the data from the table in JSON format, but nothing happens; I don't know what I am doing wrong, and I can't see the table with the content I am calling.

    My controller:

        class Grid extends Controller {

            public function f()
            {
                $this->load->model('dbgrid');
                $var['grid'] = $this->dbgrid->getcontentfromtable();

                foreach ($var['grid'] as $row) {
                    $responce->rows[$i]['id'] = $row->id;
                    $responce->rows[$i]['cell'] = array($row->id, $row->id_catalogo);
                }

                $json = json_encode($responce);
                $this->load->view('vgrid', $json);
            }

            function load_view_grid()
            {
                $this->load->view('vgrid');
            }
        }

    My model:

        class Dbgrid extends Model {

            function getcontentfromtable()
            {
                $sql = 'SELECT * FROM anuncios';
                $query = $this->db->query($sql);
                $result = $query->result();
                return $result;
            }
        }

    My view (script):

        $(document).ready(function() {
            jQuery("#list27").jqGrid({
                url: 'http://localhost/sitio/index.php/grid/f',
                datatype: "json",
                mtype: "post",
                height: 255,
                width: 600,
                colNames: ['ID', 'ID_CATALOGO'],
                colModel: [
                    {name: 'id', index: 'id', width: 65, sorttype: 'int'},
                    {name: 'id_catalogo', index: 'id_catalogo', sorttype: 'int'}
                ],
                rowNum: 50,
                rowTotal: 2000,
                rowList: [20, 30, 50],
                loadonce: true,
                rownumbers: true,
                rownumWidth: 40,
                gridview: true,
                pager: '#pager27',
                sortname: 'item_id',
                viewrecords: true,
                sortorder: "asc",
                caption: "Loading data from server at once"
            });
        });

    Hope someone can help me.
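    A hedged sketch of the shape jqGrid usually expects from the server (not tested against this exact setup): the counter $i and the response object need initializing, jqGrid wants total/page/records alongside rows, and the JSON has to be echoed as the raw response body rather than passed to a view:

        public function f()
        {
            $this->load->model('dbgrid');
            $rows = $this->dbgrid->getcontentfromtable();

            $responce = new stdClass();
            $responce->page = 1;
            $responce->total = 1;               // total number of pages
            $responce->records = count($rows);  // total number of records

            $i = 0;
            foreach ($rows as $row) {
                $responce->rows[$i]['id'] = $row->id;
                $responce->rows[$i]['cell'] = array($row->id, $row->id_catalogo);
                $i++;
            }

            echo json_encode($responce);  // jqGrid reads the raw body of the response
        }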

    Read the article

  • Dealing with huge SQL resultset

    - by Dave McClelland
    I am working with a rather large MySQL database (several million rows) with a column storing blob images. The application attempts to grab a subset of the images and run some processing algorithms on them. The problem I'm running into is that, due to the rather large dataset, the resultset my query returns is too large to store in memory.

    For the time being, I have changed the query to not return the images. While iterating over the resultset, I run another select which grabs the individual image that relates to the current record. This works, but the tens of thousands of extra queries have resulted in an unacceptable performance decrease.

    My next idea is to limit the original query to 10,000 results or so, and then keep querying over spans of 10,000 rows. This seems like a middle-of-the-road compromise between the two approaches. I feel that there is probably a better solution that I am not aware of. Is there another way to hold only portions of a gigantic resultset in memory at a time?

    Cheers,
    Dave McClelland
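    One hedged sketch of the chunking idea (table and column names hypothetical): paging on the primary key instead of OFFSET keeps each query cheap, because MySQL seeks straight to the next key rather than re-scanning the skipped rows:

        -- Fetch the next chunk, remembering the last id seen by the application.
        SELECT id, image_blob
        FROM images
        WHERE id > ?          -- last id of the previous chunk; 0 on the first pass
        ORDER BY id
        LIMIT 10000;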

    Read the article

  • Problem retrieving Strings from varbinary columns using Hibernate and MySQL

    - by user303396
    Hello. Here's my scenario: I save a bunch of Strings containing Asian characters in MySQL using Hibernate. These strings are written to varbinary columns. Everything works fine during the save operation, and the DB contains the correct values (sequence of bytes). If I query (again using Hibernate) for the Strings that I saved, I get the correct results. But when Hibernate fills the entity to which the Strings belong with the values from the DB, I get different values than the ones I used in the query that retrieved them. Instead of receiving the correct values I receive a bunch of FFFD replacement characters. For example: if I store "??" in the DB and then I query for it, the resulting String will be \uFFFD\uFFFD\uFFFD\uFFFD\uFFFD\uFFFD.

    The DB connection has the following parameters set: useUnicode=true&characterEncoding=UTF-8. I've also tried using the true UTF-8 configurations for Hibernate, but that didn't solve the problem.

    What am I missing? Any suggestions? Thanks

    Read the article

  • Lucene.NET 2.9 and BitArray/DocIdSet

    - by Paul Knopf
    I found a great example of grabbing facet counts for a base query. It stores the BitArray of the base query to improve performance each time a facet gets counted.

        var genreQuery = new TermQuery(new Term("genre", genre));
        var genreQueryFilter = new QueryFilter(genreQuery);
        BitArray genreBitArray = genreQueryFilter.Bits(searcher.GetIndexReader());
        Console.WriteLine("There are " + GetCardinality(genreBitArray) + " documents with the genre " + genre);

        // Next perform a regular search and get its BitArray result
        Query searchQuery = MultiFieldQueryParser.Parse(term,
            new[] {"title", "description"},
            new[] {BooleanClause.Occur.SHOULD, BooleanClause.Occur.SHOULD},
            new StandardAnalyzer());
        var searchQueryFilter = new QueryFilter(searchQuery);
        BitArray searchBitArray = searchQueryFilter.Bits(searcher.GetIndexReader());
        Console.WriteLine("There are " + GetCardinality(searchBitArray) + " documents containing the term " + term);

    The only problem is that I am using a newer version of Lucene.NET (2.9) and Filter.Bits is obsolete. We are told to use DocIdSet instead (rather than BitArray). I cannot figure out how to do the bitArray.And(bitArray) with a DocIdSet. I looked in Reflector and found OpenIdSet, which has And operations. Not sure if OpenIdSet is the route to go, I'm just saying. Thanks in advance!
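    A hedged sketch, assuming Lucene.NET 2.9 mirrors the Java 2.9 API (where OpenBitSetDISI builds an OpenBitSet from a DocIdSet iterator, and OpenBitSet offers And and Cardinality):

        // Possible replacement for the obsolete Filter.Bits/BitArray path.
        var reader = searcher.GetIndexReader();

        var genreBits = new OpenBitSetDISI(
            genreQueryFilter.GetDocIdSet(reader).Iterator(), reader.MaxDoc());
        var searchBits = new OpenBitSetDISI(
            searchQueryFilter.GetDocIdSet(reader).Iterator(), reader.MaxDoc());

        genreBits.And(searchBits);                 // in-place intersection
        long facetCount = genreBits.Cardinality(); // documents matching both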

    Read the article

  • Add a :filter param to the end of a url

    - by bob
    Hello, I am building an application that requires a search to go deeper than the initial level. I am using nested scopes to accomplish this, with params in the URL. For example, when a user searches for "handbag" it creates a query like the following:

        http://localhost:3000/junks?search=handbag&condition=&category=&main_submit=Go!

    I want to add a :filter param to the end of this URL and reload the page with the new URL:

        http://localhost:3000/junks?search=handbag&condition=&category=&main_submit=Go!&filter=lowest_price

    As shown above, &filter=lowest_price is added to the query. I have already written controller code to handle this, and I know it works as long as the query is like the second link above.

    My view code:

        <div class="filter_options">
          <p>
            <strong>Filter by:</strong>
            <%= link_to_unless_current "Recent", :filter => 'recent' %> |
            <%= link_to_unless_current "Lowest Price", :filter => 'lowest_price' %> |
            <%= link_to_unless_current "Highest Price", :filter => 'highest_price' %>
          </p>
        </div>

    This is how I have it currently, which works if my URL does not have a search string attached to it. Unfortunately, this will go to http://localhost:3000/junks?filter=lowest_price even if there is a search string. I would like to know how to create my links so that they add onto the search string shown in the second code example. Additionally, if a search string with a filter is already present, it should only change that filter and resubmit the search string with the new filter. I hope I am being clear.
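    A hedged Rails-style sketch of the usual trick: merge the new filter into the current request's params, so the existing search string survives the click and an existing filter is simply overwritten:

        <p>
          <strong>Filter by:</strong>
          <%= link_to_unless_current "Recent", params.merge(:filter => 'recent') %> |
          <%= link_to_unless_current "Lowest Price", params.merge(:filter => 'lowest_price') %> |
          <%= link_to_unless_current "Highest Price", params.merge(:filter => 'highest_price') %>
        </p>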

    Read the article

  • Iterating arrays in a batch file

    - by dboarman-FissureStudios
    I am writing a batch file (I asked a question on SU) to iterate over terminal servers searching for a specific user. So, I have the basic start of what I'm trying to do:
    - Enter a user name
    - Iterate terminal servers
    - Display servers where the user is found (they can be found on multiple servers now and again, depending on how the connection is lost)
    - Display a menu of options

    To iterate the terminal servers I have:

        for /f "tokens=1" %%Q in ('query termserver') do (set __TermServers.%%Q)

    Now, I am getting the error...

        Environment variable __TermServers.SERVER1 not defined

    ...for each of the terminal servers. This is really the only thing in my batch file at this point. Any idea why this error is occurring? Obviously the variable is not defined, but I understood the SET command to do just that.

    I'm also thinking that in order to continue working on the iteration (each terminal server), I will need to do something like:

        :Search
        for /f "tokens=1" %%Q in ('query termserver') do (call Process)
        goto Break

        :Process
        for /f "tokens=1" %%U in ('query user %%username%% /server:%%Q') do (set __UserConnection = %%C)
        goto Search

    However, there are 2 things that bug me about this:
    - Is the %%Q value still alive when calling Process?
    - When I goto Search, will the for-loop be starting over?

    I'm doing this with the tools I have at my disposal, so as much as I'd like to hear about PowerShell and other ways to do this, it would be futile. I have Notepad and that's it.

    Note: I would continue this line of questions on SuperUser, except that it seems to be getting more into programming specifics.
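    For what it's worth, a sketch of the first fix: `set` without an equals sign displays a variable instead of defining one, which is exactly the "not defined" message above. An assignment form would be:

        for /f "tokens=1" %%Q in ('query termserver') do (
            rem The '=' makes this an assignment rather than a lookup.
            set "__TermServers.%%Q=1"
        )

    And on the second worry: `goto` abandons a running for-loop, but `call :Process %%Q` jumps to the label, passes the server name as %1, and returns to the loop when the subroutine ends.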

    Read the article

  • Laravel - Paginate and get()

    - by Bajongskie
    With the code below, what I wanted was to paginate the query I created. But when I try to add paginate after get, it throws an error. I want to keep get, since I want to limit the results to the columns set in $fields. What would be the better way to paginate this, or what's a good substitute for get that still limits the columns?

    What I tried:

        ->get($this->fields)->paginate($this->limit)

    Part of my controller:

        class PhonesController extends BaseController {

            protected $limit = 5;
            protected $fields = array('Phones.*', 'manufacturers.name as manufacturer');

            /**
             * Display a listing of the resource.
             *
             * @return Response
             */
            public function index()
            {
                if (Request::query("str")) {
                    $phones = Phone::where("model", 'LIKE', '%'. Request::query('str') . '%')
                        ->join('manufacturers', 'manufacturers_id', '=', 'manufacturers.id')
                        ->get($this->fields);
                } else {
                    $phones = Phone::join('manufacturers', 'manufacturers_id', '=', 'manufacturers.id')
                        ->get($this->fields);
                }

                return View::make('phones.index')->with('phones', $phones);
            }
        }
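    If this is Laravel 4, a hedged sketch: paginate() accepts the column list as its second argument, so it can stand in for get() entirely rather than being chained after it:

        $phones = Phone::join('manufacturers', 'manufacturers_id', '=', 'manufacturers.id')
            ->paginate($this->limit, $this->fields);  // columns go here instead of get()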

    Read the article

  • Entity Framework, Foreign Keys and EntityKeys

    - by Greg
    I've got a problem with the Entity Framework and foreign keys. I've got a table "BC_Message_Assets" which has 3 FKs (MessageId, AssetId and StatusId). I create my "MessageAsset" like this:

        MessageAsset messageAsset = new MessageAsset();
        messageAsset.MessageStatusReference.EntityKey = new EntityKey("MyEntities.MessageStatusSet", "Id", 1);
        messageAsset.AssetReference.EntityKey = new EntityKey("MyEntities.AssetSet", "Id", 1);
        messageAsset.MessageReference.EntityKey = new EntityKey("MyEntities.MessageSet", "Id", messageId);
        context.AddToMessageAssetSet(messageAsset);
        context.SaveChanges();

    But I got the following exception:

        The INSERT statement conflicted with the FOREIGN KEY constraint "FK_BC_Message_Assets_BC_Assets".
        The conflict occurred in database "Test", table "dbo.BC_Assets", column 'Id'.
        The statement has been terminated.

    When I look at the query, I notice that the parameter value for AssetId is "0" despite the fact that I provided "1" to the EntityKey. Here's the generated query:

        exec sp_executesql N'insert [dbo].[BC_Message_Assets]([MessageId], [AssetId], [CompletionTime], [StatusId])
        values (@0, @1, null, @2)', N'@0 int,@1 int,@2 int', @0=47, @1=0, @2=1

    I can't explain what is happening. I hardcoded "1" in the EntityKey and I received "0" in the query?

    Read the article

  • Hibernate JPA 2.0 CriteriaQuery, subset listing and counting at once

    - by Jeroen
    Hello, I recently started using the new Hibernate (EntityManager) 3.5.1 with JPA 2.0, and I was wondering if it is possible to retrieve both a (sub)set of entities and their count from a single CriteriaQuery instance. My current implementation looks as follows:

        class HibernateResult<T> extends AbstractResult<T> {

            /**
             * Construct a new {@link HibernateResult}.
             * @param criteriaQuery the criteria query
             * @param selector the selector that determines the entities to return
             */
            HibernateResult(CriteriaQuery<T> criteriaQuery, Selector selector, EntityManager entityManager) {
                CriteriaBuilder builder = entityManager.getCriteriaBuilder();

                // Count the entities
                CriteriaQuery<Long> countQuery = builder.createQuery(Long.class);
                Root<T> path = criteriaQuery.from(criteriaQuery.getResultType());
                countQuery.select(builder.count(path));
                final int count = entityManager.createQuery(countQuery).getSingleResult().intValue();
                this.setCount(count);

                // List the entities according to the selector
                TypedQuery<T> entityQuery = entityManager.createQuery(criteriaQuery);
                entityQuery.setFirstResult(selector.getFirstResult());
                entityQuery.setMaxResults(selector.getMaxRecords());
                List<T> entities = entityQuery.getResultList();
                this.setEntities(entities);
            }
        }

    The thing is that I want to count all entities that match my criteria query, but the count method from CriteriaBuilder only seems to take an Expression as argument. Is there any quick way of converting my criteria query to an expression?

    Read the article

  • Crawler does not create custom crawled properties

    - by user173739
    I have been facing a very strange problem these days. I have a development environment with MOSS 2007 SP2 and Windows Server 2008; I have search configured and everything works great. I started configuring a staging environment (MOSS 2007 SP2 with the June CU), created a new farm and a new SSP, deployed my changes with a package (wsp), and manually created the site collections, sub-webs, pages and so on.

    When the full crawl finishes, I see in the crawl log that all my pages have been successfully crawled, and when I use some test tools to query the search, my pages are found. In the crawl log there are a few errors like http://mysite/sites/de/pages "The crawler could not communicate with the server. Check that the server is available and that the firewall access is configured correctly..", but all pages in this Pages library were indexed.

    The problem is that I use custom managed properties (mapped to custom crawled properties) in search queries, but the crawler didn't create crawled properties for all my new site columns. For example, for the site column IsAccent the crawler didn't create the crawled property ows_IsAccent. I'm sure that I have created pages for the specific content type, and all my crawl categories have "Automatically discover new properties when a crawl takes place" checked. In Site Settings - Searchable columns, I haven't got any column selected as NoCrawl.

    I tried to export my managed and crawled properties from the dev environment to the stage environment, but all my managed properties were empty; after that I recreated the SSP... and the result was the same. I checked a specific page with tools like SharePoint Manager 2007 and the U2U CAML Query Builder 2007 to confirm the content type is correct, and I can see the values of my custom site columns. Using the U2U CAML Query Builder 2007 against a Pages library, in the Result tab I can see ows_IsAccent (my site column is IsAccent) and other site columns, but I can't find them in the crawled properties.

    Any ideas?

    Read the article

  • General Web Programming/designing Question: ?

    - by Prasad
    Hi, I have been in web programming for 2 years (self-taught; a biology researcher by profession). I designed a small wiki with the needed functionality and a scientific RTE; of course, a lot more is expected. I used the MooTools framework and AJAX extensively.

    I was always curious whenever I saw the query strings passed via the URL: long, encrypted-looking query strings passed directly to the server. Google's design, especially, is like this. I think this is the start of providing a web service to a client, I guess.

    Now, my question is: is this a special, highly professional, efficient or advanced web design technique, communicating queries via the URL? I always felt that direct URL-based communication is faster. I tried my bit and could send a query through the URL directly. Here is the link:

        http://sgwiki.sdsc.edu/getSGMPage.php?8

    By this, the client can directly link to the desired page instead of searching, and/or can automate; there are many possibilities. The next request: can I be pointed to such a technique of web programming?

    Oops, I am sorry if I have not been able to convey my request clearly.

    Prasad

    Read the article

  • C# and Metadata File Errors

    - by j-t-s
    Hi All,

    I've created my own little C# compiler using the tutorial on MSDN, and it's not working properly. I get a few errors, then I fix them, then I get new, different errors, and so on. The latest error is really confusing me:

        Line number: 0, Error number: CS0006, 'Metadata file 'System.Linq.dll' could not be found;

    I do not know what this means. Can somebody please explain what's going on here? Here is my code.

    MY SAMPLE C# COMPILER CODE:

        using System;

        namespace JTM
        {
            public class CSCompiler
            {
                protected string ot, rt, ss, es;
                protected bool rg, cg;

                public string Compile(String se, String fe, String[] rdas, String[] fs, Boolean rn)
                {
                    System.CodeDom.Compiler.CodeDomProvider CODEPROV =
                        System.CodeDom.Compiler.CodeDomProvider.CreateProvider("CSharp");
                    ot = fe;

                    System.CodeDom.Compiler.CompilerParameters PARAMS =
                        new System.CodeDom.Compiler.CompilerParameters();
                    // Ensure the compiler generates an EXE file, not a DLL.
                    PARAMS.GenerateExecutable = true;
                    PARAMS.OutputAssembly = ot;

                    foreach (String ay in rdas)
                    {
                        if (ay.Contains(".dll"))
                            PARAMS.ReferencedAssemblies.Add(ay);
                        else
                        {
                            string refd = ay;
                            refd = refd + ".dll";
                            PARAMS.ReferencedAssemblies.Add(refd);
                        }
                    }

                    System.CodeDom.Compiler.CompilerResults rs =
                        CODEPROV.CompileAssemblyFromFile(PARAMS, fs);

                    if (rs.Errors.Count > 0)
                    {
                        foreach (System.CodeDom.Compiler.CompilerError COMERR in rs.Errors)
                        {
                            es = es + "Line number: " + COMERR.Line
                               + ", Error number: " + COMERR.ErrorNumber
                               + ", '" + COMERR.ErrorText + ";"
                               + Environment.NewLine + Environment.NewLine;
                        }
                    }
                    else
                    {
                        // Compilation succeeded.
                        es = "Compilation Succeeded.";
                        if (rn)
                            System.Diagnostics.Process.Start(ot);
                    }
                    return es;
                }
            }
        }

    ... And here is the app that passes the code to the above class:

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Drawing;
        using System.Linq;
        using System.Text;
        using System.Windows.Forms;

        namespace WindowsFormsApplication1
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    InitializeComponent();
                }

                private void button1_Click(object sender, EventArgs e)
                {
                    string[] f = { "Form1.cs", "Form1.Designer.cs", "Program.cs" };
                    string[] ra = { "System.dll", "System.Windows.Forms.dll", "System.Data.dll",
                                    "System.Drawing.dll", "System.Deployment.dll", "System.Xml.dll",
                                    "System.Linq.dll" };

                    JTS.CSCompiler CSC = new JTS.CSCompiler();
                    MessageBox.Show(CSC.Compile(
                        textBox1.Text,
                        @"Test Application.exe",
                        ra, f, false));
                }
            }
        }

    So, as you can see, all the using directives are there. I don't know what this error means. Any help at all is much appreciated. Thank you.
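    An observation for anyone hitting CS0006 this way: there is no System.Linq.dll in the .NET Framework; the System.Linq namespace ships inside System.Core.dll, so the reference list presumably wants that instead:

        // System.Core.dll is where the System.Linq types actually live (.NET 3.5+).
        string[] ra = { "System.dll", "System.Windows.Forms.dll", "System.Data.dll",
                        "System.Drawing.dll", "System.Deployment.dll", "System.Xml.dll",
                        "System.Core.dll" };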

    Read the article

  • Deleting JPA entity containing @CollectionOfElements throws ConstraintViolationException

    - by Lyle
    I'm trying to delete entities which contain lists of Integer, and I'm getting ConstraintViolationExceptions because of the foreign key on the table generated to hold the integers. It appears that the delete isn't cascading to the mapped collection. I've done quite a bit of searching, but all of the examples I've seen on how to accomplish this are in reference to a mapped collection of other entities, which can be annotated; here I'm just storing a list of Integer.

    Here is the relevant excerpt from the class I'm storing:

        @Entity
        @Table(name = "CHANGE_IDS")
        @GenericGenerator(
            name = "CHANGE_ID_GEN",
            strategy = "org.hibernate.id.enhanced.SequenceStyleGenerator",
            parameters = {
                @Parameter(name = "sequence_name", value = "course_changes_seq"),
                @Parameter(name = "increment_size", value = "5000"),
                @Parameter(name = "optimizer", value = "pooled")
            }
        )
        @NamedQueries({
            @NamedQuery(
                name = "Changes.getByStatus",
                query = "SELECT c " +
                        "FROM Changes c " +
                        "WHERE c.status = :status "),
            @NamedQuery(
                name = "Changes.deleteByStatus",
                query = "DELETE " +
                        "FROM Changes c " +
                        "WHERE c.status = :status ")
        })
        public class Changes {

            @Id
            @GeneratedValue(generator = "CHANGE_ID_GEN")
            @Column(name = "ID")
            private final long id;

            @Enumerated(EnumType.STRING)
            @Column(name = "STATUS", length = 20, nullable = false)
            private final Status status;

            @Column(name = "DOC_ID")
            @org.hibernate.annotations.CollectionOfElements
            @org.hibernate.annotations.IndexColumn(name = "DOC_ID_ORDER")
            private List<Integer> docIds;
        }

    I'm deleting the Changes objects using a @NamedQuery:

        final Query deleteQuery = this.entityManager.createNamedQuery("Changes.deleteByStatus");
        deleteQuery.setParameter("status", Status.POST_FLIP);
        final int deleted = deleteQuery.executeUpdate();
        this.logger.info("Deleted " + deleted + " POST_FLIP Changes");
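    For context, a hedged sketch of why this fails and one workaround: a bulk JPQL DELETE goes straight to the entity table and bypasses cascade handling, so the generated collection-table rows survive and violate the FK. Loading the entities and removing them individually lets Hibernate delete the collection rows first (assuming the getByStatus query above):

        // Workaround sketch: entity-by-entity removal cascades to the
        // @CollectionOfElements table, unlike a bulk DELETE statement.
        List<Changes> stale = entityManager
                .createNamedQuery("Changes.getByStatus")
                .setParameter("status", Status.POST_FLIP)
                .getResultList();
        for (Changes c : stale) {
            entityManager.remove(c);
        }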

    Read the article

  • BCP task hangs while executing

    - by user350374
    Hey guys, we have an HPC node that runs some of our tasks on it. I have a task in my .NET project that kicks off the bcp utility on the HPC node, and the output of the query that I have runs to 9 MB. When the HPC node runs this task, the output of the query is dumped into a file; after it dumps around 5 MB of data it suddenly stops dumping any more, and this happens every time. (Please note this isn't a data issue, as it's not crashing on a particular row every time.) This may or may not be of significance, but I dump the data onto a different server, which has adequate permissions set. I have run the command with the same query directly on the HPC node and on other computers, and it gives the right output.

    I'm running the bcp command as follows:

        var processInfo = new ProcessStartInfo("bcp.exe", argument)
        {
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            CreateNoWindow = true,
            UseShellExecute = false
        };

        var proc = new Process { StartInfo = processInfo, EnableRaisingEvents = true };
        proc.Exited += new EventHandler(bcp_log);
        proc.Start();
        proc.WaitForExit();

    So my code actually waits for each bcp task to finish before it goes ahead, as I call it multiple times. FYI, to remind you again: it only fails when the output exceeds a certain number of bytes, in this case approx 5 MB. Any help is much appreciated.
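    One suspicion worth ruling out: when both standard output and standard error are redirected but never read, the child process blocks as soon as the pipe buffer fills, and bcp prints progress messages to stdout as it copies, which would match a hang at a size threshold rather than at a particular row. A sketch of the async-drain pattern:

        var proc = new Process { StartInfo = processInfo, EnableRaisingEvents = true };
        proc.Exited += new EventHandler(bcp_log);
        proc.OutputDataReceived += (s, e) => { /* optionally log e.Data */ };
        proc.ErrorDataReceived += (s, e) => { /* optionally log e.Data */ };
        proc.Start();
        proc.BeginOutputReadLine();  // drain stdout so bcp's progress output
        proc.BeginErrorReadLine();   // and errors can never fill the pipe buffer
        proc.WaitForExit();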

    Read the article

  • what's the alternative to readonlycollection when using lazy="extra"?

    - by Kunjan
    I am trying to use lazy="extra" for the child collection of Trades I have on my Client object. The trades are an ISet<Trade> Trades, exposed as ReadOnlyCollection<Trade>, because I do not want anyone to modify the collection directly. As a result, I have added AddTrade and RemoveTrade methods.

    Now I have a Client search page where I need to show the trade count, and on the Client details page I have a tab where I need to show all the trades for the client in a paged grid view.

    What I want to achieve is: for the search, when I say client.Trades.Count on the client object, NHibernate should only fire a SELECT COUNT(*) query. Hence I am using lazy="extra". But because I am using a ReadOnlyCollection, NHibernate fires a count query and a separate query to load the child collection of trades completely. Also, I cannot include the trades in my initial search request, as this would disturb the paging: a counterparty can have n trades, which would result in n rows when I am searching clients only. So the child collections have to be loaded lazily.

    The second problem is that on the Client details page, in the trades grid view, I have enabled paging for performance reasons. But by nature NHibernate loads the entire collection of trades as the user goes from one page to another. Ideally I want to control this by getting only the trades specific to the page the user is on. How can I achieve this?

    I came across this very good article: http://stackoverflow.com/questions/876976/implementing-ipagedlistt-on-my-models-using-nhibernate. But I am not sure if this will work for me, as lazy="extra" currently doesn't work as expected with the ReadOnlyCollection. So, if I went ahead and implemented the solution this way, and further enhanced it by making the List/Set immutable, will lazy="extra" give me the same problem as with ReadOnlyCollections?

    Read the article

  • How effective can this Tagging solution be?

    - by Pablo
    I'm working on an image sharing site and want to implement tagging for the images. I've read Questions #20856 and #2504150. I have a few concerns with the approach in those questions. First of all, it looks easy to link an image to a tag. However, getting images by tag relation is not as easy, because you have to get the image-to-tag relation from one table and then make a big query with a bunch of OR statements (one OR for every image).

    Before I even researched the tagging topic, I started testing the following method, with these tables as examples:

        Table: Image -- Columns: ItemID, Title, Tags
        Table: Tag   -- Columns: TagID, Name

    The Tags column in the Image table takes a string with multiple TagIDs from the Tag table, enclosed by dashes (-). For example, -65-25-105- links an image with the TagIDs 65, 25 and 105. With this method I find it easier to get images by tag, as I can get the TagID with one query and get all the images with another simple query like:

        SELECT * FROM Image WHERE Tags LIKE '%-65-%'

    So if I use this method for tagging, how effective is it? Is querying by LIKE '%-65-%' a slow process? What problems can I face in the future?
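    For comparison, a sketch of the junction-table design the linked questions describe; it avoids both the OR chains and the LIKE scan, because the lookup is driven by an index (table and column names follow the question, the junction table is hypothetical):

        CREATE TABLE ImageTag (
            ItemID INT NOT NULL,               -- FK to Image
            TagID  INT NOT NULL,               -- FK to Tag
            PRIMARY KEY (ItemID, TagID),
            KEY idx_tag_item (TagID, ItemID)   -- drives tag -> images lookups
        );

        -- All images carrying tag 65, resolved through the index:
        SELECT i.*
        FROM Image i
        INNER JOIN ImageTag it ON it.ItemID = i.ItemID
        WHERE it.TagID = 65;

    A LIKE pattern with a leading wildcard, by contrast, cannot use an index at all, so every such query scans the whole Image table.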

    Read the article

  • T-SQL Right Joins to ALL Entries inc Selected Column

    - by Pace
    Hi Experts, I have the following query, which produces the output below:

        SELECT TBLUSERS.USERID,
               TBLUSERS.ADusername,
               TBLACCESSLEVELS.ACCESSLEVELID,
               TBLACCESSLEVELS.AccessLevelName
        FROM TBLACCESSLEVELS
        INNER JOIN TBLACCESSRIGHTS
            ON TBLACCESSLEVELS.ACCESSLEVELID = TBLACCESSRIGHTS.ACCESSLEVELID
        INNER JOIN TBLUSERS
            ON TBLACCESSRIGHTS.USERID = TBLUSERS.USERID

    The output is this:

        29 administrator 1 AllUsers
        29 administrator 2 JobQueue
        29 administrator 3 Telephone Directory Admin
        29 administrator 4 Jobqueueadmin
        29 administrator 5 UserAdmin
        29 administrator 6 Product System
        27 alan 1 AllUsers
        97 andy 1 AllUsers
        26 barry 1 AllUsers
        26 barry 2 JobQueue
        26 barry 3 Telephone Directory Admin
        26 barry 4 Jobqueueadmin
        26 barry 5 UserAdmin
        26 barry 6 Product System
        26 barry 7 Newseditor
        26 barry 8 GreetingBoard

    What I would like to do is modify the query so I get all access levels regardless of whether there is an entry for that user. I would also like some sort of EXISTS case so that I get output like the following:

        29 administrator 1 AllUsers True
        29 administrator 2 JobQueue True
        29 administrator 3 Telephone Directory Admin True
        29 administrator 4 Jobqueueadmin True
        29 administrator 5 UserAdmin True
        29 administrator 6 Product System True
        29 administrator 7 Newseditor False
        29 administrator 8 GreetingBoard False
        27 alan 1 AllUsers True
        27 alan 2 JobQueue False
        27 alan 3 Telephone Directory Admin False
        27 alan 4 Jobqueueadmin False
        27 alan 5 UserAdmin False
        27 alan 6 Product System False
        27 alan 7 Newseditor False
        27 alan 8 GreetingBoard False
        97 andy 1 AllUsers True
        97 andy 2 JobQueue False
        97 andy 3 Telephone Directory Admin False
        97 andy 4 Jobqueueadmin False
        97 andy 5 UserAdmin False
        97 andy 6 Product System False
        97 andy 7 Newseditor False
        97 andy 8 GreetingBoard False
        26 Barry 1 AllUsers True
        26 Barry 2 JobQueue True
        26 Barry 3 Telephone Directory Admin True
        26 Barry 4 Jobqueueadmin True
        26 Barry 5 UserAdmin True
        26 Barry 6 Product System True
        26 Barry 7 Newseditor True
        26 Barry 8 GreetingBoard True
        ...

    So the rules are: ALWAYS show ALL entries from TBLACCESSLEVELS, and where a row EXISTS in TBLACCESSRIGHTS, produce a True/False to show this. I hope this makes sense, and hopefully you don't need the table definitions, as everything I need to work with is in the original query. I just need a way of manipulating it slightly and getting the join in the right place. Thank you. Pace
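    A hedged sketch of one common shape for this (untested against the real schema): cross join users to all access levels, then left-join the rights table and derive the flag from whether a matching row exists:

        SELECT u.USERID,
               u.ADusername,
               al.ACCESSLEVELID,
               al.AccessLevelName,
               CASE WHEN ar.USERID IS NULL THEN 'False' ELSE 'True' END AS HasAccess
        FROM TBLUSERS AS u
        CROSS JOIN TBLACCESSLEVELS AS al
        LEFT JOIN TBLACCESSRIGHTS AS ar
               ON ar.USERID = u.USERID
              AND ar.ACCESSLEVELID = al.ACCESSLEVELID
        ORDER BY u.ADusername, al.ACCESSLEVELID;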

    Read the article
