Search Results

Search found 21434 results on 858 pages for 'query master'.


  • Performance impact when using XML columns in a table with MS SQL 2008

    - by Sam Dahan
    I am using a simple table with 6 columns, 3 of which are of XML type, not schema-constrained. When the table reaches a size around 120,000 or 150,000 rows, I see a dramatic performance cost in doing any query on the table. For comparison, I have another table which grows in size at about the same rate but contains only scalar types (int, datetime, a few float columns). That table performs perfectly fine even past 200,000 rows. And by the way, I am not using XQuery on the XML columns; I am only using regular SQL query statements.

    Some specifics: both tables contain a DateTime field called SampleTime. A statement like the following (it lives in a stored procedure, but I'm showing the actual statement)

        SELECT MAX(sampleTime) SampleTime
        FROM dbo.MyRecords
        WHERE PlacementID = @somenumber

    takes 0 seconds on the table without XML columns, and anything from 13 to 20 seconds on the table with XML columns, depending on which drive I set my database on. At the moment it sits on a separate spindle (not C:) and it takes 13 seconds.

    Has anyone seen this behavior before, or does anyone have a hint at what I am doing wrong? I tried this with SQL Server 2008 Express and the full-blown SQL Server 2008; that made no difference. Oh, one last detail: I am doing this from a C# application (.NET 3.5) using SqlConnection, SqlDataReader, etc. I'd appreciate some insight into that, thanks! Sam
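
    One avenue worth testing (a suggestion, not from the question): the wide XML columns make any scan of the base rows expensive, so give the MAX query a narrow index it can seek on instead. A minimal sketch, assuming the table and column names above:

        -- Hypothetical index; lets MAX(SampleTime) for one PlacementID resolve
        -- entirely from the index, never touching the rows that carry the XML.
        CREATE NONCLUSTERED INDEX IX_MyRecords_Placement_SampleTime
            ON dbo.MyRecords (PlacementID, SampleTime DESC);

    If the plan still scans, comparing both tables' plans with SET STATISTICS IO ON should show whether the XML table is simply doing far more page reads per row.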

  • Selecting data in clustered index order without ORDER BY

    - by kcrumley
    I know there is no guarantee without an ORDER BY clause, but are there any techniques to tune SQL Server tables so they're more likely to return rows in clustered index order, without having to specify ORDER BY every single time I want to run a super-quick ad hoc query? For example, would rebuilding my clustered index or updating statistics help?

    I'm aware that I can't count on a query like

        select * from AuditLog where UserId = 992

    to return records in the order of the clustered index, so I would never build code into an application based on this assumption. But for simple ad hoc queries, on almost all of my tables, the data consistently comes out in clustered index order, and I've gotten used to being able to expect the most recent results to be at the bottom. Out of all the many tables we use, I've only noticed two ever giving me results in an unpredicted order. This is really just an annoyance, but it would be nice to be able to minimize it.

    In case this is relevant because of page boundary issues or something like that, I should mention that one of the tables with inconsistent ordering, the AuditLog table, is the longest table we have that has a clustered index on an identity column. Also, this database has recently been moved from SQL 2005 to SQL 2008, and we've seen no noticeable change in this behavior.
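
    For what it's worth, when the clustered index already provides the order, adding the ORDER BY is essentially free: the optimizer reads the index in order and no sort operator is added to the plan. A sketch, assuming the identity column is named AuditLogId (a hypothetical name):

        -- Same plan as the unordered query when AuditLogId is the clustered key,
        -- but now the order is guaranteed rather than incidental.
        SELECT * FROM AuditLog WHERE UserId = 992 ORDER BY AuditLogId;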

  • FreeTDS runs out of memory from DBD::Sybase

    - by skiphoppy
    When I add client charset = UTF-8 to my freetds.conf file, my DBD::Sybase program emits:

        Out of memory!

    and terminates. This happens when I call execute() on an SQL query statement that returns any ntext fields. I can return numeric data, datetimes, and nvarchars just fine, but whenever one of the output fields is ntext, I get this error. All these queries work perfectly fine without the UTF-8 setting, but I do need to handle some characters that throw warnings under the default character set. (See related question.)

    The error message is not formatted the same way other DBD::Sybase error messages seem to be formatted. I do get a message that a rollback() is being issued, though. (My false AutoCommit flag is being honored.) I think I read somewhere that FreeTDS uses iconv to convert between character sets; is it possible that this message is being emitted from iconv? If I execute the same query with the same freetds.conf settings in tsql (FreeTDS's command-line SQL shell), I don't get the error. I'm connecting to SQL Server. What do I need to do to get these queries to return successfully?
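
    One thing worth checking (an assumption on my part, not something stated in the question): ntext handling in FreeTDS depends heavily on the TDS protocol version, and the old defaults (4.2/5.0) are known to misbehave with wide types once charset conversion is involved. A sketch of a freetds.conf entry pinning a newer protocol; server name and host are placeholders:

        [mssql-server]
            host = sqlserver.example.com
            port = 1433
            # TDS 8.0 (the SQL Server 2000+ protocol) handles ntext/nvarchar
            # natively; older protocol versions often don't.
            tds version = 8.0
            client charset = UTF-8

    If tsql works but DBD::Sybase doesn't, it is also worth confirming both are actually reading the same freetds.conf, since the library can be compiled with a different default path.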

  • Group / User based security. Table / SQL question

    - by Brett
    Hi, I'm setting up a group / user based security system. I have 4 tables as follows: user, groups, group_user_mappings, acl, where acl is the mapping between an item_id and either a group or a user. In the acl table I have 3 columns of note (there's actually a 4th as an auto-id, but that is irrelevant):

        col 1: item_id  (item to access)
        col 2: user_id  (user that is allowed to access)
        col 3: group_id (group that is allowed to access)

    So for example:

        item_id | user_id | group_id
        item1   | peter   |
        item2   |         | group1
        item3   | jane    |

    So the acl will give access either to a user or to a group. Any one line in the acl table will have either an item-user mapping or an item-group mapping.

    If I want to have a query that returns all objects a user has access to, I think I need a SQL query with a UNION, because I need 2 separate queries that join like:

        item - acl - group - user
        AND
        item - acl - user

    This I guess will work OK. Is this how it's normally done? Am I doing this the right way? Seems a little messy. I was thinking I could get around it by creating a single user group for each person, so I only ever deal with groups in my SQL, but this seems a little messy as well.
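
    The two-branch UNION is indeed the usual shape. A sketch using the table names above (the column names on group_user_mappings are guesses):

        -- Items granted to the user directly, plus items granted to any
        -- group the user belongs to.
        SELECT a.item_id
        FROM acl a
        WHERE a.user_id = @user_id
        UNION
        SELECT a.item_id
        FROM acl a
        JOIN group_user_mappings m ON m.group_id = a.group_id
        WHERE m.user_id = @user_id;

    Using UNION (rather than UNION ALL) also de-duplicates items a user can reach both ways.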

  • SSIS Transaction with SQL Transaction

    - by Mike
    I started with a package to make sure transactions are working correctly. The package-level transaction is set to Required. I have two Execute SQL Tasks: one deletes rows from a table and one does 1/0 to throw an error. Both tasks are set to the Supported transaction level and the Serializable IsolationLevel. That works.

    Now when I replace my two SQL tasks with two separate procedure calls, the first one, ChargeInterest, runs successfully, but the second one, PaymentProcess, always fails saying:

        [Execute SQL Task] Error: Executing the query "Exec [proc_xx_NotesReceivable_PaymentProcess] ..."
        failed with the following error: "Uncommittable transaction is detected at the end of the batch.
        The transaction is rolled back.". Possible failure reasons: Problems with the query, "ResultSet"
        property not set correctly, parameters not set correctly, or connection not established correctly.

    PaymentProcess is the second stored procedure. Both procedures have their own BEGIN, COMMIT and ROLLBACK inside the SP. I believe the transactions are being handled successfully in ChargeInterest, because I can run the following without issues (or the dreaded "you started with 0 and now have 1 transaction"):

        EXEC [proc_XX_NotesReceivable_ChargeInterest] 'NR', 'M', 186, 300
        EXEC [proc_XX_NotesReceivable_PaymentProcess] 'NR', 186, 300
        -- OR
        GO
        BEGIN TRAN
        EXEC [proc_XX_NotesReceivable_ChargeInterest] 'NR', 'M', 186, 300
        EXEC [proc_XX_NotesReceivable_PaymentProcess] 'NR', 186, 300
        ROLLBACK TRAN

    Now I have noticed that DTC gets kicked off in both instances. Why, I am not sure, because it is using the same connection. In the live example I can see the transaction get started, but it disappears if I put a breakpoint on the PreExecute event of the second stored procedure. What is the correct way to mingle SP transactions with SSIS transactions?
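
    The usual way to make a procedure safe under an ambient (DTC) transaction is to open and commit its own transaction only when none exists, instead of unconditionally pairing BEGIN/COMMIT. A sketch of the pattern (my suggestion; adapt to the real procedure bodies):

        DECLARE @tc int;
        SET @tc = @@TRANCOUNT;            -- > 0 when SSIS already owns a transaction
        IF @tc = 0 BEGIN TRANSACTION;
        BEGIN TRY
            -- ... procedure body ...
            IF @tc = 0 COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @tc = 0 AND XACT_STATE() <> 0 ROLLBACK TRANSACTION;
            -- re-raise so SSIS still sees the failure
            DECLARE @msg nvarchar(2048);
            SET @msg = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);
        END CATCH

    A ROLLBACK inside a procedure rolls back the enclosing SSIS transaction too and leaves it uncommittable, which matches the error PaymentProcess reports.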

  • How to tell if OpenGL is really working in Ubuntu 10.04

    - by Jonathan
    I have a Lenovo S9e running Intel integrated graphics. Here is my lspci output related to the graphics:

        00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)
            Subsystem: Lenovo Device 3870
            Flags: bus master, fast devsel, latency 0
            Memory at f0580000 (32-bit, non-prefetchable) [size=512K]
            Capabilities: [d0] Power Management version 2

    I want to know how I can make sure OpenGL support is running in full on an Ubuntu 10.04 installation. I have a few hints to think that it is not:

        - The "Desktop Effects" will not load
        - Apps such as Stardock, when attempting to use OpenGL rendering, display black boxes instead of transparency
        - In the game Pioneers, the number-tile icons are suspiciously just black circles
        - Windows games running with Wine will only support software rendering, not hardware rendering

    When I boot into a Knoppix LiveCD, the desktop effects work splendidly, meaning Compiz detects my computer as capable. My problem with troubleshooting is that Canonical has basically eliminated the conf-file-based mechanism of X11 as far as I can tell, thus making it even harder to ensure graphics modules are loading properly. How do I debug and test OpenGL on my Ubuntu 10.04 installation?
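
    A quick check that needs no X11 conf files (standard Mesa tooling, so it should apply on 10.04):

        # glxinfo ships in the mesa-utils package
        sudo apt-get install mesa-utils
        glxinfo | grep "direct rendering"   # "Yes" means hardware GL is active
        glxinfo | grep "OpenGL renderer"    # "Software Rasterizer" here means no hardware GL
        glxgears                            # crude visual smoke test

    It is also worth grepping /var/log/Xorg.0.log for "(EE)" lines mentioning the intel or dri modules.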

  • Fast, easy, and secure method to perform DB actions with GET

    - by rob - not a robber
    Hey All, this is sort of a methods/best-practices question that I am sure has been addressed, yet I can't find a solution based on the vague search terms I enter. I know starting off the question with "fast and easy" will probably draw out a few sighs, so my apologies.

    Here is the deal. I have a logged-in area where an ADMIN can do a whole host of POST operations to input data relating to their profile. The way I have the data structured is pretty distinct and well segmented in most tables as it relates to the ID of the admin. Now, I have a table where I dump one type of data and differentiate this data by assigning the ADMIN's unique ID to each record. In other words, all ADMINs have this one type of data writing to this table; I just differentiate by the ADMIN ID on each record.

    I was planning on letting the ADMIN remove these records by clicking on a link with a query string - obviously using GET. Obviously, the query structure is in the link, so any logged-in admin could then exploit the URL and delete a competitor's records. Is the only way to safely do this through POST, or should I pass through the session info that includes the password and validate it against the ADMIN ID that is requesting the delete? This is obviously much more work for me.

    As they said in the auto repair biz I used to work in, there are 3 ways to do a job: fast, good, and cheap. You can only have two at a time. Fast and cheap will not be good. Good and cheap will not have fast turnaround. Fast and good will NOT be cheap. haha. I guess that applies here... can never have fast, easy and secure all at once ;) Thanks in advance...
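
    The HTTP verb isn't really the security boundary here; the ownership check is. Whether the request arrives by GET or POST, scoping the delete to the admin ID already stored in the session is enough - no password re-validation needed. A sketch, assuming a PHP/PDO stack; the table and parameter names are made up:

        <?php
        session_start();
        $adminId  = (int) $_SESSION['admin_id'];   // set at login
        $recordId = (int) $_GET['record_id'];      // hypothetical query-string parameter

        // $pdo: your existing PDO connection. The WHERE clause ties the delete
        // to the logged-in admin, so a tampered record_id matches zero rows.
        $stmt = $pdo->prepare('DELETE FROM admin_records WHERE id = :rid AND admin_id = :aid');
        $stmt->execute(array(':rid' => $recordId, ':aid' => $adminId));

    POST is still preferable for destructive actions, but mainly so crawlers and link prefetchers can't trigger them - not as the access control itself.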

  • Cannot boot from a hard drive

    - by Martin Melka
    I have a problem booting from an HDD. I used to have it as my main drive before I bought an SSD, so I had been able to boot from it. But for some reason, now, half a year later, I can't get it to work. I completely erased it, deleting data and partitioning (using EASEUS Partition Master), then I installed Kubuntu (without changing anything in the installer), but it simply won't boot up. It always boots the drive with Windows, and when I unplug that drive, it only gives me the error "PXE-E61: Media test failure, check cable" - I guess it's trying to boot from LAN. I tried installing the system on a freshly wiped drive, without any other drives plugged into the PC, but the problem persists. This is how the drives look now (the first one has Windows 7 installed, the second one Kubuntu; partition screenshot omitted here).

    I am lost. I mean, after doing a fresh wipe and a clean install without altering anything, it should work. But it doesn't. What can be wrong here? Thanks
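
    One avenue worth ruling out (my guess; the post doesn't say where GRUB ended up): "PXE-E61" just means the BIOS found no bootable code on any disk and fell through to network boot, so the installer may have written the bootloader somewhere unhelpful. From a live session, reinstalling GRUB to the Kubuntu drive's MBR looks roughly like this (/dev/sdX1 and /dev/sdX are placeholders for the Kubuntu root partition and drive):

        sudo mount /dev/sdX1 /mnt
        sudo grub-install --boot-directory=/mnt/boot /dev/sdX
        # (--root-directory=/mnt on older GRUB 2 builds)

    It is also worth double-checking that the BIOS boot order actually lists this drive when it is the only one attached.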

  • Test Tomcat for SSL renegotiation vulnerability

    - by Jim
    How can I test if my server is vulnerable to SSL renegotiation? I tried the following (using OpenSSL 0.9.8j-fips 07 Jan 2009):

        openssl s_client -connect 10.2.10.54:443

    I see it connects, it brings the certificate chain, it shows the server certificate, and last:

        SSL handshake has read 2275 bytes and written 465 bytes
        ---
        New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
        Server public key is 1024 bit
        Secure Renegotiation IS supported
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : DHE-RSA-AES256-SHA
            Session-ID: 50B4839724D2A1E7C515EB056FF4C0E57211B1D35253412053534C4A20202020
            Session-ID-ctx:
            Master-Key: 7BC673D771D05599272E120D66477D44A2AF4CC83490CB3FDDCF62CB3FE67ECD051D6A3E9F143AE7C1BA39D0BF3510D4
            Key-Arg   : None
            Start Time: 1354008417
            Timeout   : 300 (sec)
            Verify return code: 21 (unable to verify the first certificate)

    What does "Secure Renegotiation IS supported" mean? That SSL renegotiation is allowed? Then I requested a renegotiation, but I did not get an exception or see the certificate again:

        verify error:num=20:unable to get local issuer certificate
        verify return:1
        verify error:num=27:certificate not trusted
        verify return:1
        verify error:num=21:unable to verify the first certificate
        verify return:1
        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Type: text/html;charset=ISO-8859-1
        Content-Length: 174
        Date: Tue, 27 Nov 2012 09:13:14 GMT
        Connection: close

    So is the server vulnerable to SSL renegotiation or not?
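
    For context on the manual test (standard s_client behavior, not specific to this server): after the initial handshake, a line consisting of just R makes s_client request a renegotiation; following it with an HTTP request tests whether the server honors it.

        openssl s_client -connect 10.2.10.54:443
        # ...handshake output...
        R                       # type this to trigger a renegotiation
        HEAD / HTTP/1.0         # then a request; a response means the renegotiation went through

    "Secure Renegotiation IS supported" means the server implements RFC 5746 secure renegotiation, i.e. it is patched against the 2009 prefix-injection attack (CVE-2009-3555); the vulnerable configuration is "IS NOT supported" combined with the server still accepting a client-initiated renegotiation.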

  • Analyzing Web Application Speed

    - by Amy
    I'm a bit confused, because the logical/programmer brain in me says that if all things are constant, the speed of a function must be constant. I am working on a PHP web application with jqGrid as a front end for showing the data. I am testing on my personal computer, so network traffic does not apply. I make an HTTP request to a PHP function, it returns the data, and then jqGrid renders it.

    What has me befuddled is that Firebug sometimes reports this taking 300-600 milliseconds, and sometimes 3.68 seconds. I can run the request over and over again, with radically different response times. The query is the same. The number of users on the system is the same. No network latency. Same code. I'm not running other applications on the computer while testing. I could understand query caching improving performance on subsequent requests, but the speed is just fluctuating wildly with no rhyme or reason.

    So, my question is: what else can cause such variability in the response time? How can I determine what's doing it? More importantly, is there any way to get things more consistent?
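
    One way to split the time between server work and everything else (a sketch, not from the question; any stack with curl will do): time the endpoint from the command line, bypassing the browser and jqGrid entirely.

        curl -o /dev/null -s \
             -w "connect %{time_connect}s  ttfb %{time_starttransfer}s  total %{time_total}s\n" \
             "http://localhost/path/to/endpoint.php"

    If these numbers are steady while Firebug's vary, the jitter is on the browser/rendering side; if they swing too, instrumenting the PHP (e.g. microtime(true) around the query) shows whether the database or the PHP layer is the variable part.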

  • Need help fixing unique key in Rails - Rails is adding id, causing duplicate key

    - by railsnew
    I need some help in fixing the issue below. I had transaction blocks in my Rails code like this:

        @sqlcontact = "INSERT INTO contacts (id, \"cid\", \"hphone\", mphone, provider, cemail, email, sms, mail, phone)
                       VALUES ('"+@id1+"','" + @id1 + "', '"+ params[:hphone] + "', '"+params[:mphone]+ "', '" +
                       params[:provider] + "', '" + params[:cemail]+ "', '" + @varemail+ "', '"+@varsms+ "', '"+
                       @varmail+"', '"+@varphone+"')"

    My app was deployed to Heroku, so I was advised by them to remove the transaction blocks. So I changed the above to:

        @cont = Contact.new(:id => @id1, :cid => @id1, :hphone => params[:hphone],
                            :mphone => params[:mphone], :provider => params[:provider],
                            :cemail => params[:cemail], :email => @varemail, :sms => @varsms,
                            :mail => @varmail, :phone => @varphone)
        @cont.save

    My app also already had data stored. Now the problem is that when I try to save a record, I keep getting the error:

        duplicate key value violates unique constraint "contacts_pkey"

    The error also shows the SQL query trying to insert the data; however, in that SQL query I do not see the id value. As you can see from my code, I am passing the id - then why is Rails not accepting it? Does it always include its own sequential id? Can I not override the default Rails magic? And if it does that, does it not look at the data that is already in the DB? I am really stuck here. What should I do? Should I just go back to my transaction blocks?
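
    Two things are likely going on (my reading, not confirmed in the post): ActiveRecord treats the primary key as protected, so an :id passed to new() is silently dropped and the Postgres sequence supplies the next value; and because earlier rows were inserted with explicit ids, that sequence lags behind the data and hands out ids that already exist. A sketch of the usual fixes:

        # assign the id directly so mass-assignment protection can't drop it
        @cont = Contact.new(:cid => @id1, :hphone => params[:hphone])  # ...other attrs as before
        @cont.id = @id1
        @cont.save

        # and/or bump Postgres' sequence past the existing data
        # (PostgreSQL adapter helper; runs a setval() under the hood)
        ActiveRecord::Base.connection.reset_pk_sequence!('contacts')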

  • Using VirtualBox/VMware to deploy software to multiple sites?

    - by Sim
    Hi all, I'm currently evaluating the feasibility of using VirtualBox (or VMware) to deploy the following stack to 10 sites:

        - Windows XP
        - MSSQL 2005 Express Edition with Advanced Services
        - JBoss, running one in-house application that mostly queries master data (customers/products) and feeds it to other software

    Why do I want to do this? Because the IT staff at my 10 sites are not capable enough, and the steps taken to set up those in-house projects are complicated.

    What are the cons I can foresee?

        - Need extra power to run the VirtualBox instance
        - The IT staff won't be much more knowledgeable about how to install the stuff
        - Cost (a license for VirtualBox in a commercial environment, as well as an extra OS license)

    I really seek your input on the pros/cons of this approach, or any links I can read further. Thanks a lot

  • How do I get this SQL to LINQ? Multiple groups

    - by Dwight T
    For a db person, LINQ can be frustrating. I need to convert the following SQL into LINQ:

        SELECT COUNT(o.objectiveid), COUNT(DISTINCT r.ReviewId), l.Abbreviation
        FROM Objective o
        JOIN Review r ON r.ReviewId = o.ReviewId AND r.ReviewPeriodId = 3 AND r.IsDeleted = 0
        JOIN Position p ON p.PositionId = r.EmployeePositionId AND p.DivisionId = 2
        JOIN Location l ON l.LocationId = p.LocationId
        GROUP BY l.Abbreviation

    The nested group-by example might be the way I have to go, but I'm not sure. Doing one group by, I have used the following code:

        var query = from rev in db.Reviews
                        .Where(r => r.IsDeleted == false && r.ReviewPeriodId == reviewPeriodId)
                    from obj in db.Objectives
                        .Where(o => o.ReviewId == rev.ReviewId && o.IsDeleted == false)
                    from pos in db.Positions
                        .Where(p => rev.EmployeePositionId == p.PositionId && p.IsDeleted == false
                                    && p.DivisionId == divisionId)
                    from loc in db.Locations
                        .Where(l => pos.LocationId == l.LocationId)
                    group loc by loc.Abbreviation into locgroup
                    select new ReportResults
                    {
                        KeyId = 0,
                        Description = locgroup.Key,
                        Count = locgroup.Count()
                    };
        return query.ToList();

    What is the correct way? Thanks
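
    A sketch of getting both counts in one pass (names taken from the question): the COUNT(DISTINCT ...) becomes a Distinct().Count() inside the group, which LINQ to SQL translates back to COUNT(DISTINCT).

        var query = from rev in db.Reviews
                    where !rev.IsDeleted && rev.ReviewPeriodId == 3
                    join obj in db.Objectives on rev.ReviewId equals obj.ReviewId
                    join pos in db.Positions on rev.EmployeePositionId equals pos.PositionId
                    where pos.DivisionId == 2
                    join loc in db.Locations on pos.LocationId equals loc.LocationId
                    group new { obj, rev } by loc.Abbreviation into g
                    select new
                    {
                        Abbreviation   = g.Key,
                        ObjectiveCount = g.Count(),                                     // COUNT(o.objectiveid)
                        ReviewCount    = g.Select(x => x.rev.ReviewId).Distinct().Count() // COUNT(DISTINCT r.ReviewId)
                    };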

  • NHibernate - stuck with detached criteria (ASP.NET MVC 1 with NHibernate 2, C#)

    - by Jen
    OK, so I can't find a good example of this to better understand how to use detached criteria (assuming that's what I want to use in the first place). I have 2 tables: Placement and PlacementSupervisor. My PlacementSupervisor table has an FK of PlacementID, which relates to Placement.PlacementID - though my NHibernate model class has PlacementSupervisor.Placement rather than specifically a property of PlacementID (not sure if this is important).

    What I am trying to do is: if values are passed through for the supervisor ID, I want to restrict placements to those with that supervisor ID. I have tried:

        ICriteria query = m_PlacementRepository.QueryAlias("p")
        ...
        if (criteria.SupervisorId > 0 && !string.IsNullOrEmpty(criteria.SupervisorTypeId))
        {
            DetachedCriteria entityQuery = DetachedCriteria.For<PlacementSupervisor>("sup")
                .Add(Restrictions.And(
                    Restrictions.Eq("sup.supervisorId", criteria.SupervisorId),
                    Restrictions.Eq("sup.supervisorTypeId", criteria.SupervisorTypeId)
                ))
                .SetProjection(Projections.ProjectionList()
                    .AddPropertyAlias("Placement.PlacementId", "PlacementId")
                );
            query.Add(Subqueries.PropertyIn("p.PlacementId", entityQuery));
        }

    which just gives me the error:

        Could not find a matching criteria info provider to: (sup.supervisorId = 5 and sup.supervisorTypeId = U)

    Firstly, supervisorTypeId is a string. Secondly, I don't understand how to achieve what I'm trying to do, so I have just been trying various combinations of projections, property aliases and subquery options, as I don't get how I'm supposed to join to another table/entity when the FK sits in the second table. Can someone point me in the right direction? It seems like such an easy thing to do from a data perspective that hopefully I'm just missing something obvious!!
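
    Two hedged observations (the property names below are guesses based on the snippet). Criteria paths are matched case-sensitively against the mapped property names, so if the class exposes SupervisorId/SupervisorTypeId, "sup.supervisorId" won't resolve - which is roughly what "no matching criteria info provider" is complaining about. And since the FK lives on PlacementSupervisor, the projection can go through the mapped reference; a sketch:

        DetachedCriteria entityQuery = DetachedCriteria.For<PlacementSupervisor>("sup")
            .Add(Restrictions.Eq("sup.SupervisorId", criteria.SupervisorId))
            .Add(Restrictions.Eq("sup.SupervisorTypeId", criteria.SupervisorTypeId))
            // project the FK via the reference; whether the ".id" shortcut
            // resolves without a join depends on the mapping
            .SetProjection(Projections.Property("sup.Placement.id"));

        query.Add(Subqueries.PropertyIn("p.PlacementId", entityQuery));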

  • Is it possible to order by a composite key with JPA and CriteriaBuilder

    - by Kjir
    I would like to create a query using the JPA CriteriaBuilder, and I would like to add an ORDER BY clause. This is my entity:

        @Entity
        @Table(name = "brands")
        public class Brand implements Serializable {

            public enum OwnModeType { OWNER, LICENCED }

            @EmbeddedId
            private IdBrand id;
            private String code;
            // bunch of other properties
        }

    The embedded class is:

        @Embeddable
        public class IdBrand implements Serializable {
            @ManyToOne
            private Edition edition;
            private String name;
        }

    And the way I am building my query is like this:

        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<Brand> q = cb.createQuery(Brand.class).distinct(true);
        Root<Brand> root = q.from(Brand.class);
        if (f != null) {
            f.addCriteria(cb, q, root);
            f.addOrder(cb, q, root, sortCol, ascending);
        }
        return em.createQuery(q).getResultList();

    And here are the functions called:

        public void addCriteria(CriteriaBuilder cb, CriteriaQuery<?> q, Root<Brand> r) { }

        public void addOrder(CriteriaBuilder cb, CriteriaQuery<?> q, Root<Brand> r,
                             String sortCol, boolean ascending) {
            if (ascending) {
                q.orderBy(cb.asc(r.get(sortCol)));
            } else {
                q.orderBy(cb.desc(r.get(sortCol)));
            }
        }

    If I try to set sortCol to something like "id.edition.number", I get the following error:

        javax.ejb.EJBException: java.lang.IllegalArgumentException: Unable to resolve attribute [id.name] against path

    Any idea how I could accomplish that? I tried searching online, but I couldn't find a hint about this... It would also be great if I could do a similar ORDER BY when I have a @ManyToOne relationship (for instance, "id.edition.number").
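
    The root cause (standard Criteria API behavior): Path.get() takes a single attribute name and does not parse dotted strings, so "id.edition.number" has to be walked one segment at a time. A sketch of a drop-in addOrder:

        public void addOrder(CriteriaBuilder cb, CriteriaQuery<?> q, Root<Brand> r,
                             String sortCol, boolean ascending) {
            Path<?> path = r;
            for (String segment : sortCol.split("\\.")) {
                path = path.get(segment);   // id -> edition -> number
            }
            q.orderBy(ascending ? cb.asc(path) : cb.desc(path));
        }

    The same walk covers the @ManyToOne case, since navigating a single-valued association with .get() is allowed in an ORDER BY path.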

  • Excel: What formula combines this data into one COUNT amount?

    - by Mike
    I have 30 colleagues who are answering questions over 3 time periods. Each has their own Excel workbook with the questions, and over the year they update it. I collate their worksheets into one master worksheet, but now I need to combine their answers into a simple table: the questions, the time periods, and then a COUNT of how many people answered each one. For example, I need a table that shows me how many people (not the persons' names at this point) answered question 10 in time period 2. I can't use a database, before someone mentions it ;). Many thanks, Mike.
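
    A COUNTIFS sketch (Excel 2007+), assuming the collated master sheet has one row per answer with the question number in column A, the time period in column B, and the answer in column C - adjust to your actual layout:

        =COUNTIFS(Master!$A:$A, 10, Master!$B:$B, 2, Master!$C:$C, "<>")

    That counts non-blank answers to question 10 in period 2. Putting the question numbers down the side of the summary table and the periods across the top, then replacing the hard-coded 10 and 2 with references to those header cells, lets one formula fill the whole table.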

  • Mount CIFS share with autofs

    - by Phanto
    I have a system running RHEL 5.5, and I am trying to mount a Windows share on a server using autofs. (Due to the network not being ready upon startup, I do not want to utilize fstab.) I am able to mount the shares manually, but autofs is just not mounting them. Here are the files I am working with.

    At the end of /etc/auto.master, I have:

        ## Mount this test share:
        /test   /etc/auto.test  --timeout=60

    In /etc/auto.test, I have:

        test    -fstype=cifs,username=testuser,domain=domain.com,password=password  ://server/test

    I then restart the autofs service. However, this does not work; ls-ing the directory does not return any results. I have followed all these guides on the web, and I either don't understand them, or they.just.don't.work. Thank You
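
    Two things worth checking (suggestions, not from the post). First, with this map the share is keyed as "test" under the /test mount point, so the path that actually triggers the mount is /test/test - and autofs directories look empty until a key is accessed, so a bare ls /test returning nothing is normal. Second, running the daemon in the foreground shows why a key fails to mount:

        service autofs stop
        automount -f -v          # foreground, verbose; watch it while you...
        ls /test/test            # ...access the key from another shell

    If the CIFS mount itself errors, the same options can be tested directly with mount -t cifs //server/test /mnt -o username=testuser,domain=domain.com,password=password.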

  • PreparedStatement question in Java against Oracle.

    - by fardon57
    Hi everyone, I'm working on modifying some code to use PreparedStatement instead of a normal Statement, for security and performance reasons. Our application currently stores information in an embedded Derby database, but we are going to move to Oracle soon. I've found two things where I need your help regarding Oracle and PreparedStatement:

    1. I've found a document saying that Oracle doesn't handle bind parameters in IN clauses, so we cannot supply a query like:

        SELECT pokemon FROM pokemonTable WHERE capacity IN (?,?,?,?)

    Is that true? Is there any workaround? ... Why?

    2. We have some fields which are of type TIMESTAMP. So with our current Statement, the query looks like this:

        SELECT raichu FROM pokemonTable
        WHERE evolution = TO_TIMESTAMP('2500-12-31 00:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF')

    What should be done for a PreparedStatement? Should I put into the array of parameters 2500-12-31, or TO_TIMESTAMP('2500-12-31 00:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF')?

    Thanks for your help, I hope my questions are clear! Regards,
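
    On both points, a hedged sketch of the usual JDBC answers: an IN list with one ? per element works fine on Oracle (what doesn't work is binding a whole list to a single ?, so the placeholders are generated to match the list size), and TIMESTAMP columns take a java.sql.Timestamp via setTimestamp - no TO_TIMESTAMP wrapper needed:

        import java.sql.*;
        import java.util.*;

        static List<String> findPokemon(Connection conn, List<Integer> capacities,
                                        Timestamp evolution) throws SQLException {
            StringBuilder sql = new StringBuilder(
                "SELECT pokemon FROM pokemonTable WHERE capacity IN (");
            for (int i = 0; i < capacities.size(); i++) {
                sql.append(i == 0 ? "?" : ",?");      // one placeholder per element
            }
            sql.append(") AND evolution = ?");

            List<String> result = new ArrayList<String>();
            PreparedStatement ps = conn.prepareStatement(sql.toString());
            try {
                int idx = 1;
                for (Integer c : capacities) ps.setInt(idx++, c);
                ps.setTimestamp(idx, evolution);      // e.g. Timestamp.valueOf("2500-12-31 00:00:00")
                ResultSet rs = ps.executeQuery();
                while (rs.next()) result.add(rs.getString(1));
            } finally {
                ps.close();
            }
            return result;
        }

    (Oracle does cap literal IN lists at 1000 elements, which may be what that document was referring to.)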

  • Dovecot/Postfix-MySQL e-mail aliases are not correctly forwarded

    - by jo_fryli
    I recently set up Dovecot/Postfix-MySQL on my Debian Squeeze server, and I have a bit of a problem. Whenever I send an email to an alias ([email protected] forwarded to [email protected], for example), Postfix (or Dovecot, I'm not quite sure) puts the email into a mailbox rather than forwarding it to the real mail address. I have tested all the MySQL queries, and they all behave the way I intend them to:

        foobar dovecot: deliver([email protected]): msgid=<000001385b464c9a-e40af11e-3bf4-49f6-903d-1d2369f6bfb6-000000@barfoo: saved mail to INBOX

    My configuration files (linked in the original post):

        master.cf
        main.cf

    Keep in mind that normal email sending and receiving works just fine! I have set up my MySQL tables with Postfixadmin. Thanks for your help!
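
    The symptom - deliver() saving to INBOX instead of Postfix expanding the alias - usually means the alias lookup isn't wired into virtual_alias_maps, so the address resolves as a mailbox first. A hedged main.cf sketch; the file paths are the conventional Postfixadmin ones, adjust to yours:

        # aliases must be resolved by Postfix *before* delivery is handed to
        # Dovecot; if only virtual_mailbox_maps matches, the mail is delivered
        # locally exactly as the log line shows
        virtual_alias_maps   = mysql:/etc/postfix/mysql-virtual-alias-maps.cf
        virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf

    A quick way to verify the lookup itself: postmap -q [email protected] mysql:/etc/postfix/mysql-virtual-alias-maps.cf should print the forwarding target.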

  • LinqToSQL not updating database

    - by codegarten
    Hi. I created a database and dbml in Visual Studio 2010 using its wizards. Everything was working fine until I checked the tables' data (also in Visual Studio's Server Explorer) and none of my updates were there.

        using (var context = new CenasDataContext())
        {
            context.Log = Console.Out;
            context.Cenas.InsertOnSubmit(new Cena() { id = 1 });
            context.SubmitChanges();
        }

    This is the code I am using to update my database. At this point my database has one table with one field (PK) named ID. Here is the log from the execution (the context log printed to the console):

        INSERT INTO [dbo].Cenas VALUES (@p0)
        -- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [1]
        -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.1

    The problem I'm having is that these updates are not persistent in the database. I mean that when I query my database (Visual Studio Server Explorer - New Query), I see the table is empty, every time. I am using a SQL Server database file (.mdf).
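
    A common cause with .mdf file databases (my guess; the post doesn't confirm it): the project includes the .mdf with Copy to Output Directory set to "Copy always", so every build copies a fresh, empty database into bin\Debug - and that copy is what the app's |DataDirectory| connection string attaches, while Server Explorer looks at the original file. A sketch of what to compare:

        <!-- app.config: which physical file does the app actually use? -->
        <connectionStrings>
          <add name="CenasDataContext"
               connectionString="Data Source=.\SQLEXPRESS;
                   AttachDbFilename=|DataDirectory|\Cenas.mdf;
                   Integrated Security=True" />
        </connectionStrings>

    Setting the .mdf's Copy to Output Directory property to "Copy if newer" (or pointing both at one absolute path) makes the inserts stick between runs.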

  • Is it possible to change a static value into a dynamic (JavaScript) value in a SharePoint allitems.aspx page?

    - by lerac
        <SharePoint:SPDataSource runat="server" IncludeHidden="true"
            SelectCommand="&lt;View&gt;&lt;Query&gt;&lt;OrderBy&gt;&lt;FieldRef Name=&quot;EventDate&quot;/&gt;&lt;/OrderBy&gt;&lt;Where&gt;&lt;Contains&gt;&lt;FieldRef Name=&quot;lawyer_x0020_1&quot;/&gt;&lt;Value Type=&quot;Note&quot;&gt;F. Sanches&lt;/Value&gt;&lt;/Contains&gt;&lt;/Where&gt;&lt;/Query&gt;&lt;/View&gt;"
            id="datasource1" DataSourceMode="List" UseInternalName="true">
          <InsertParameters><asp:Parameter DefaultValue="{ANUMBER}" Name="ListID"></asp:Parameter>

    This is just one line of the allitems.aspx page of a SharePoint list; it only displays items where lawyer 1 = F. Sanches. Before I start messing around with the .aspx page, I wonder if it is possible to change "F. Sanches" (in the code) into a dynamic variable (from a JavaScript value, or something else that can be used to place the JavaScript value in there dynamically). If I put any JavaScript code in the line, it will not work. (P.S. Ignore the ANUMBER part in the code.)

    Let's say, to make it simple, I have a JavaScript variable like this (static now, but dynamic with my other code). It would already be an achievement if it could place a static JavaScript variable:

        <SCRIPT type=text/javascript>javaVAR = "P. Janssen";</script>

    If yes - how? If no - thank you!
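
    For what it's worth, client-side JavaScript can never feed a value into SelectCommand: the CAML runs on the server before any script executes. The server-side equivalent is to bind a parameter (query string, cookie, etc.) into the command. A hedged sketch, relying on SPDataSource substituting {ParameterName} tokens from its SelectParameters:

        <SharePoint:SPDataSource runat="server" id="datasource1"
            DataSourceMode="List" UseInternalName="true"
            SelectCommand="&lt;View&gt;&lt;Query&gt;&lt;Where&gt;&lt;Contains&gt;&lt;FieldRef Name=&quot;lawyer_x0020_1&quot;/&gt;&lt;Value Type=&quot;Note&quot;&gt;{lawyer}&lt;/Value&gt;&lt;/Contains&gt;&lt;/Where&gt;&lt;/Query&gt;&lt;/View&gt;">
          <SelectParameters>
            <asp:QueryStringParameter Name="lawyer" QueryStringField="lawyer"
                DefaultValue="F. Sanches" />
          </SelectParameters>
        </SharePoint:SPDataSource>

    The page would then be called as AllItems.aspx?lawyer=P.%20Janssen; if the value truly must originate in JavaScript, the script can rewrite the URL's query string and reload.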

  • How do I programmatically insert rows into a Silverlight DataGrid without binding?

    - by j0rd4n
    I am using a common Silverlight DataGrid to display results from a search. The "schema" of the search can vary from query to query. To accommodate this, I am trying to dynamically populate the DataGrid. I can set the columns explicitly, but I am having trouble setting the ItemsSource. All of the MSDN examples set the ItemsSource to a collection with a strong type (e.g. a custom type with public properties matching the schema). The DataGrid then uses reflection to scour the strong type for public properties that match the columns.

    Since my search results are dynamic, I cannot create a strong type to represent what comes back. Can I not just give the DataGrid an arbitrary list of objects, so long as the number of objects in each list matches the number of columns? Anyone know if this is possible? I would like to do something similar to this:

        List<List<object>> myResults = <voodoo that populates the result list>;
        myDataGrid.ItemsSource = myResults;
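
    Binding can't see list elements directly, but it can address an indexer. One workaround sketch (assumes Silverlight 4, which added indexer paths in bindings; all names below are made up):

        // using System.Linq; using System.Windows.Data;
        // Row wraps one result and exposes its cells through an indexer.
        public class Row
        {
            private readonly List<object> _cells;
            public Row(List<object> cells) { _cells = cells; }
            public object this[int i] { get { return _cells[i]; } }
        }

        // Each generated column binds to "[0]", "[1]", ... on the row.
        for (int i = 0; i < columnCount; i++)
        {
            myDataGrid.Columns.Add(new DataGridTextColumn
            {
                Header  = headers[i],
                Binding = new Binding("[" + i + "]")
            });
        }
        myDataGrid.ItemsSource = myResults.Select(c => new Row(c)).ToList();

    AutoGenerateColumns has to be false for this, since the grid's own reflection won't find the indexer.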

  • Access 2007 not allowing user to delete record in subform

    - by Todd McDermid
    Good day... The root of my issue is that there's no context menu allowing the user to delete a row from a form; the "delete" button on the ribbon is also disabled. In Access 2003, apparently this function was available, but since our recent "upgrade" to 2007 (the file is still in MDB format) it's no longer there. Please keep in mind I'm not an Access dev, nor did I create this app - I inherited support for it. ;)

    Now for the details, and what I've tried. The form in question is a subform on a larger form. I've tried turning "AllowDeletes" on on both forms. I've checked the toolbar and ribbon properties on the forms to see if they loaded some custom stuff, but no. I've tried changing the "record locks" to "on edit"; no joy. I examined the query to see if it was "too complicated" to permit a delete - as far as I can tell, it's a very simple two-table (linked) join. Compared to another form in this app that does permit row deletes, that one has a much more complicated query (multi-join, built on queries).

    Is there a resource that would describe the required conditions for allowing deletes? Thanks in advance...

  • Show users a list of unique items on Java Google App Engine

    - by James
    I've been going round in circles with what must be a very simple challenge, but I want to do it the most efficient way from the start. So, I've watched Brett Slatkin's Google I/O videos (2008 & 2009) about building scalable apps, including http://www.youtube.com/watch?v=AgaL6NGpkB8, and read the docs, but as a n00b I'm still not sure.

    I'm trying to build an app on GAE/J similar to the original 'hotornot', where a user is presented with an item which they rate. Once they rate it, they are presented with another one which they haven't seen before. My question is this: is it most efficient to do a query up front to grab x items (say 100) and put them in a list (stored in memcache?), or is it better to simply make a query for a new item after each rating? To keep track of the items a user has seen, I'm planning to keep those items' keys in a list property of the user's entity. Does that sound sensible? I've really got myself confused about this, so any help would be much appreciated.
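
    Either way, the datastore can't express "key not in (seen list)" as a filter, so that filtering happens in memory - which tilts the economics toward fetching a keys-only batch once and serving from cache. A sketch with the low-level API (the entity kind and variable names are illustrative):

        import com.google.appengine.api.datastore.*;
        import java.util.*;

        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();

        // keys-only queries are far cheaper than fetching full entities
        Query q = new Query("Item").setKeysOnly();
        List<Entity> batch = ds.prepare(q)
                               .asList(FetchOptions.Builder.withLimit(100));

        Set<Key> seen = new HashSet<Key>(user.getSeenKeys()); // the list property on the user entity
        List<Key> candidates = new ArrayList<Key>();
        for (Entity e : batch) {
            if (!seen.contains(e.getKey())) candidates.add(e.getKey());
        }
        // cache 'candidates' in memcache keyed by user; pop one per rating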

  • How do I include a frameset under CGI.pm?

    - by neversaint
    I want to have a CGI script that does two things:

        1. Take the input from a form.
        2. Generate results, based on the input values, in a frame.

    I also want the frame to exist only after the result is generated/printed. Below is simplified code for what I want to do, but somehow it doesn't work. What's the right way to do it?

        #!/usr/local/bin/perl
        use CGI ':standard';

        print header;
        print start_html('A Simple Example'),
            h1('A Simple Example'),
            start_form,
            "What's your name? ", textfield('name'),
            p,
            "What's the combination?", p,
            checkbox_group(-name     => 'words',
                           -values   => ['eenie', 'meenie', 'minie', 'moe'],
                           -defaults => ['eenie', 'minie']),
            p,
            "What's your favorite color? ",
            popup_menu(-name   => 'color',
                       -values => ['red', 'green', 'blue', 'chartreuse']),
            p,
            submit,
            end_form,
            hr;

        if (param()) {
            # begin creating the frame
            print <<EOF;
        <html><head><title>$TITLE</title></head>
        <frameset rows="10,90">
        <frame src="$script_name/query" name="query">
        <frame src="$script_name/response" name="response">
        </frameset>
        EOF
            # finish creating the frame
            print "Your name is: ", em(param('name')), p,
                  "The keywords are: ", em(join(", ", param('words'))), p,
                  "Your favorite color is: ", em(param('color')),
                  hr;
        }
        print end_html;
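
    The core problem (general frames behavior, not specific to CGI.pm): a frameset document may contain only frameset/frame elements, and each frame's src is a separate HTTP request, so one response can't carry both the frameset and its contents. The classic CGI.pm pattern has the same script serve all three pieces, dispatching on path_info(). A sketch:

        #!/usr/local/bin/perl
        use strict;
        use warnings;
        use CGI ':standard';

        my $script = url();        # this script's own URL
        my $path   = path_info(); # "/query", "/response", or empty

        if ($path =~ /query/) {
            # top frame: the form, submitting into the response frame
            print header, start_html('Query'),
                  start_form(-target => 'response'),
                  "What's your name? ", textfield('name'), submit,
                  end_form, end_html;
        }
        elsif ($path =~ /response/) {
            # bottom frame: show results once parameters arrive
            print header, start_html('Response'),
                  (param() ? ("Your name is: ", em(param('name'))) : 'No input yet.'),
                  end_html;
        }
        else {
            # first hit: emit only the frameset
            print header, <<"EOF";
        <html><head><title>A Simple Example</title></head>
        <frameset rows="50%,50%">
          <frame src="$script/query" name="query">
          <frame src="$script/response" name="response">
        </frameset></html>
        EOF
        }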
