Search Results

Search found 24915 results on 997 pages for 'ordered test'.


  • Broken Views

    - by Ajarn Mark Caldwell
    “SELECT *” isn’t just hazardous to performance; it can actually return blatantly wrong information. There are a number of blog posts and articles out there that actively discourage the use of the SELECT * FROM … syntax. The two most common explanations that I have seen are:

    Performance: The SELECT * syntax will return every column in the table, but frequently you really only need a few of the columns, so by using SELECT * you are retrieving large volumes of data that you don’t need but that the system has to process, marshal across tiers, and so on. It would be much more efficient to select only the specific columns that you need.

    Future-proof: If you are taking other shortcuts in your code along with using SELECT *, you are setting yourself up for trouble down the road when enhancements are made to the system. For example, if you use SELECT * to return results from a table into a DataTable in .NET and then reference columns positionally (e.g. myDataRow[5]), you could end up with bad data if someone happens to add a column into position 3, skewing all the remaining columns’ ordinal positions. Or if you use INSERT…SELECT *, then you will likely run into errors when a new column is added to the source table in any position.

    And if you use SELECT * in the definition of a view, you will run into a variation of the future-proof problem mentioned above. One of the guys on my team, Mike Byther, ran across this in a project we were doing, but fortunately he caught it while we were still in development. I asked him to put together a test to prove that this was related to the use of SELECT * and not some other anomaly. I’ll walk you through the test script so you can see for yourself what happens.

    We are going to create a table and two views that are based on that table; one of them uses SELECT * and the other explicitly lists the column names. The script to create these objects is listed below.

        IF OBJECT_ID('testtab') IS NOT NULL DROP TABLE testtab
        go
        IF OBJECT_ID('testtab_vw') IS NOT NULL DROP VIEW testtab_vw
        go
        IF OBJECT_ID('testtab_vw_named') IS NOT NULL DROP VIEW testtab_vw_named
        go

        CREATE TABLE testtab (col1 NVARCHAR(5) null, col2 NVARCHAR(5) null)
        INSERT INTO testtab(col1, col2)
        VALUES ('A','B'), ('A','B')
        GO
        CREATE VIEW testtab_vw AS SELECT * FROM testtab
        GO
        CREATE VIEW testtab_vw_named AS SELECT col1, col2 FROM testtab
        go

    Now, to prove that the two views currently return equivalent results, select from them.

        SELECT 'star', col1, col2 FROM testtab_vw
        SELECT 'named', col1, col2 FROM testtab_vw_named

    OK, so far, so good. Now, what happens if someone makes a change to the definition of the underlying table, and that change results in a new column being inserted between the two existing columns? (Side note: I normally prefer to append new columns to the end of the table definition, but some people like to keep their columns alphabetized, and for clarity for later people reviewing the schema it may make sense to group certain columns together. Whatever the reason, it sometimes happens, and you need to protect yourself and your code from the repercussions.)
        DROP TABLE testtab
        go
        CREATE TABLE testtab (col1 NVARCHAR(5) null, col3 NVARCHAR(5) NULL, col2 NVARCHAR(5) null)
        INSERT INTO testtab(col1, col3, col2)
        VALUES ('A','C','B'), ('A','C','B')
        go
        SELECT 'star', col1, col2 FROM testtab_vw
        SELECT 'named', col1, col2 FROM testtab_vw_named

    I would have expected that the view using SELECT * in its definition would essentially pass through the column name and still retrieve the correct data, but that is not what happens. When you run our two SELECT statements again, you see that the view that is based on SELECT * actually retrieves the data based on the ordinal position of the columns at the time that the view was created. Sure, one work-around is to recreate the view, but you can’t really count on other developers to know the dependencies you have built in, and they won’t necessarily recreate the view when they refactor the table.

    I am sure that there are reasons and justifications for why views behave this way, but I find it particularly disturbing that you can have code asking for col2 but actually be receiving data from col3. By the way, for the record, this entire scenario and accompanying test script apply to SQL Server 2008 R2 with Service Pack 1.

    So, let the developer beware… know what assumptions are in effect around your code, and keep on discouraging people from using the SELECT * syntax in anything but the simplest of ad-hoc queries.

    And of course, let’s clean up after ourselves. To eliminate the database objects created during this test, run the following commands.

        DROP TABLE testtab
        DROP VIEW testtab_vw
        DROP VIEW testtab_vw_named
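    The .NET half of the future-proof argument is just as easy to demonstrate. Here is a minimal C# sketch (illustrative only, not part of Mark's test script) showing why reading DataRow values by column name survives the kind of schema change that silently breaks ordinal access:

        using System;
        using System.Data;

        class OrdinalVersusName
        {
            static void Main()
            {
                // Simulate the original layout of testtab: col1, col2
                var table = new DataTable("testtab");
                table.Columns.Add("col1", typeof(string));
                table.Columns.Add("col2", typeof(string));
                table.Rows.Add("A", "B");

                DataRow row = table.Rows[0];

                // Ordinal access: returns whatever happens to live at position 1,
                // so it silently changes meaning if a column is inserted before col2.
                Console.WriteLine(row[1]);

                // Name-based access: keeps returning col2 no matter where it ends up.
                Console.WriteLine(row["col2"]);
            }
        }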


  • Configuring DBFS on Exadata

    - by Liu Maclean
    ?Exadata???DBFS ??????? 1. ??fuse RPM  [root@dm01db01 ~]# yum install fuse Loaded plugins: rhnplugin, security This system is not registered with ULN. ULN support will be disabled. Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package fuse.x86_64 0:2.7.4-8.0.1.el5 set to be updated --> Finished Dependency Resolution Dependencies Resolved ========================================================================================================================================================================  Package                            Arch                                 Version                                         Repository                                Size ======================================================================================================================================================================== Installing:  fuse                               x86_64                               2.7.4-8.0.1.el5                                 el5_latest                                85 k Transaction Summary ======================================================================================================================================================================== Install       1 Package(s) Upgrade       0 Package(s) Total download size: 85 k Is this ok [y/N]: y Downloading Packages: fuse-2.7.4-8.0.1.el5.x86_64.rpm                                                                                                                  |  85 kB     00:00      Running rpm_check_debug Running Transaction Test Finished Transaction Test Transaction Test Succeeded Running Transaction   Installing     : fuse                                                                                                                                             1/1  Installed:   fuse.x86_64 0:2.7.4-8.0.1.el5                                                                                                                                          [root@dm01db01 ~]# yum install fuse-libs Loaded plugins: rhnplugin, security This system is not registered with ULN. ULN support will be disabled. 
Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package fuse-libs.i386 0:2.7.4-8.0.1.el5 set to be updated ---> Package fuse-libs.x86_64 0:2.7.4-8.0.1.el5 set to be updated --> Finished Dependency Resolution Dependencies Resolved ========================================================================================================================================================================  Package                                Arch                                Version                                       Repository                               Size ======================================================================================================================================================================== Installing:  fuse-libs                              i386                                2.7.4-8.0.1.el5                               el5_latest                               71 k  fuse-libs                              x86_64                              2.7.4-8.0.1.el5                               el5_latest                               70 k Transaction Summary ======================================================================================================================================================================== Install       2 Package(s) Upgrade       0 Package(s) Total download size: 141 k Is this ok [y/N]: y Downloading Packages: (1/2): fuse-libs-2.7.4-8.0.1.el5.x86_64.rpm                                                                                                      |  70 kB     00:00      (2/2): fuse-libs-2.7.4-8.0.1.el5.i386.rpm                                                                                                        |  71 kB     00:00      ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Total                                                                                                                                    71 kB/s | 141 kB     00:01      Running rpm_check_debug Running Transaction Test Finished Transaction Test Transaction Test Succeeded Running Transaction   Installing     : fuse-libs                                                                                                                                        1/2    Installing     : fuse-libs                                                                                                                                        2/2  Installed:   fuse-libs.i386 0:2.7.4-8.0.1.el5                                                  fuse-libs.x86_64 0:2.7.4-8.0.1.el5                                                  Complete! [root@dm01db01 ~]# yum install fuse-devel Loaded plugins: rhnplugin, security This system is not registered with ULN. ULN support will be disabled. 
Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package fuse-devel.i386 0:2.7.4-8.0.1.el5 set to be updated ---> Package fuse-devel.x86_64 0:2.7.4-8.0.1.el5 set to be updated --> Finished Dependency Resolution Dependencies Resolved ========================================================================================================================================================================  Package                                 Arch                                Version                                      Repository                               Size ======================================================================================================================================================================== Installing:  fuse-devel                              i386                                2.7.4-8.0.1.el5                              el5_latest                               28 k  fuse-devel                              x86_64                              2.7.4-8.0.1.el5                              el5_latest                               28 k Transaction Summary ======================================================================================================================================================================== Install       2 Package(s) Upgrade       0 Package(s) Total download size: 57 k Is this ok [y/N]: y Downloading Packages: (1/2): fuse-devel-2.7.4-8.0.1.el5.x86_64.rpm                                                                                                     |  28 kB     00:00      (2/2): fuse-devel-2.7.4-8.0.1.el5.i386.rpm                                                                                                       |  28 kB     00:00      ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Total                                                                                                                                    21 kB/s |  57 kB     00:02      Running rpm_check_debug Running Transaction Test Finished Transaction Test Transaction Test Succeeded Running Transaction   Installing     : fuse-devel                                                                                                                                       1/2    Installing     : fuse-devel                                                                                                                                       2/2  Installed:   fuse-devel.i386 0:2.7.4-8.0.1.el5                                                 fuse-devel.x86_64 0:2.7.4-8.0.1.el5                                                 Complete! 2. ?? DBFS??? ?????? cd $ORACLE_HOME/rdbms/admin sqlplus / as sysdba Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options SQL> @prvtfspi.plb Package body created. No errors. Package body created. No errors. ?????dbms_dbfs_sfs package  SQL> create tablespace dbfstbs datafile size 20g; Tablespace created. SQL> create user maclean_dbfs identified by oracle; User created. SQL> grant dba to maclean_dbfs; Grant succeeded. @@!!! SQL> grant  dbfs_role to maclean_dbfs; Grant succeeded. 3. ??DBFS SQL> conn maclean_dbfs/oracle Connected. SQL> @?/rdbms/admin/dbfs_create_filesystem.sql  dbfstbs mac_dbfs   No errors. 
-------- CREATE STORE: begin dbms_dbfs_sfs.createFilesystem(store_name => 'FS_MAC_DBFS', tbl_name => 'T_MAC_DBFS', tbl_tbs => 'dbfstbs', lob_tbs => 'dbfstbs', do_partition => false, partition_key => 1, do_compress => false, compression => '', do_dedup => false, do_encrypt => false); end; -------- REGISTER STORE: begin dbms_dbfs_content.registerStore(store_name=> 'FS_MAC_DBFS', provider_name => 'sample1', provider_package => 'dbms_dbfs_sfs'); end; -------- MOUNT STORE: begin dbms_dbfs_content.mountStore(store_name=>'FS_MAC_DBFS', store_mount=>'mac_dbfs'); end; -------- CHMOD STORE: declare m integer; begin m := dbms_fuse.fs_chmod('/mac_dbfs', 16895); end; No errors. 4.  ??mount point  [root@dm01db01 ~]# mkdir /dbfs [root@dm01db01 ~]# chown oracle:oinstall /dbfs 5. ??library path ?OS  # echo "/usr/local/lib" >> /etc/ld.so.conf.d/usr_local_lib.conf 6. ?????? export ORACLE_HOME=/s01/orabase/product/11.2.0/dbhome_1 [root@dm01db01 ~]# ln -s $ORACLE_HOME/lib/libclntsh.so.11.1 /usr/local/lib/libclntsh.so.11.1 [root@dm01db01 ~]#  ln -s $ORACLE_HOME/lib/libnnz11.so /usr/local/lib/libnnz11.so [root@dm01db01 ~]#  ln -s /lib64/libfuse.so.2 /usr/local/lib/libfuse.so.2 7. ??ldconfig  [root@dm01db01 ~]# ldconfig [root@dm01db01 ~]#  8. ??fusermount??????? [root@dm01db01 ~]#  chmod +x /usr/bin/fusermount [root@dm01db01 ~]#  ls -l /usr/bin/fusermount lrwxrwxrwx 1 root root 15 Sep  7 03:06 /usr/bin/fusermount -> /bin/fusermount [root@dm01db01 ~]#  ls -l /bin/fusermount -rwsr-x--x 1 root fuse 27072 Oct 17  2011 /bin/fusermount 9. ???????OS  dbfs_client maclean_dbfs@dm01db01:1521/orcl  /dbfs 10. ????nohup + &?????mount DBFS,???????????? [oracle@dm01db01 ~]$ echo "oracle"  >> dbfs_pw [oracle@dm01db01 ~]$ nohup dbfs_client maclean_dbfs@dm01db01:1521/orcl /dbfs < dbfs_pw & [oracle@dm01db01 ~]$ df -h Filesystem            Size  Used Avail Use% Mounted on /dev/mapper/VGExaDb-LVDbSys1                        30G   15G   14G  53% / /dev/sda1             502M   30M  447M   7% /boot /dev/mapper/VGExaDb-LVDbOra1                        99G   20G   75G  21% /u01 tmpfs                  81G     0   81G   0% /dev/shm dbfs-maclean_dbfs@orcl:/                        20G  120K   20G   1% /dbfs [oracle@dm01db01 ~]$ mount /dev/mapper/VGExaDb-LVDbSys1 on / type ext3 (rw) proc on /proc type proc (rw) sysfs on /sys type sysfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) /dev/sda1 on /boot type ext3 (rw,nodev) /dev/mapper/VGExaDb-LVDbOra1 on /u01 type ext3 (rw,nodev) tmpfs on /dev/shm type tmpfs (rw,size=82052m) none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) dbfs-maclean_dbfs@orcl:/ on /dbfs type fuse (rw,nosuid,nodev,max_read=1048576,default_permissions,user=oracle) [oracle@dm01db01 ~]$ ls -l /dbfs/ total 0 drwxrwxrwx 3 root root 0 Sep 14 05:11 mac_dbfs [oracle@nas ~]$ dbfs_client  --------MOUNT mode: usage: dbfs_client <db_user>@<db_server> [options] <mountpoint>   db_user:              Name of Database user that owns DBFS content repository filesystem(s)   db_server:            A valid connect string for Oracle database server                         (for example, hrdb_host:1521/hrservice)   mountpoint:           Path to mount Database File System(s)                         All the file systems owned by the database user will be seen at the mountpoint. DBFS options:   -o direct_io          Bypass the Linux page cache. Gives much better performance for large files.                         Programs in the file system cannot be executed with this option.                         
This option is recommended when DBFS is used as an ETL staging area.   -o wallet             Run dbfs_client in background.                         Wallet must be configured to get credentials.   -o failover           dbfs_client fails over to surviving database instance with no data loss.                         Some performance cost on writes, especially for small files.   -o allow_root         Allows root access to the filesystem.                         This option requires setting 'user_allow_other' parameter in '/etc/fuse.conf'.   -o allow_other        Allows other users access to the file system.                         This option requires setting 'user_allow_other' parameter in '/etc/fuse.conf'.   -o rw                 Mount the filesystem read-write. [Default]   -o ro                 Mount the filesystem read-only. Files cannot be modified.   -o trace_file=STR     Tracing <filename> | 'syslog'   -o trace_level=N      Trace Level: 1->DEBUG, 2->INFO, 3->WARNING, 4->ERROR, 5->CRITICAL [Default: 4]   -h                    help   -V                    version --------COMMAND mode: Usage:     dbfs_client <db_user>@<db_server> --command command [switches] [arguments]             command:          Command to be executed, e.g., ls, cp, mkdir, rm            switches:         Switches are described below for each command.            arguments:        File names or directory names NOTE:      All database pathnames must be absolute and preceded by dbfs:/ Commands   ls            dbfs_client <db_user>@<db_server> --command ls [switches] target      Switches:              -a         Show all files including those starting with '.'            -l         Use a long listing format. In addition to the name of each file                       print the file type, permissions, size, user and group information            -R         List subdirectories recursively cp                     dbfs_client <db_user>@<db_server> --command cp [switches] source destination      Switches:              -r, -R      Copy a directory and its contents recursively into the destination directory rm                     dbfs_client <db_user>@<db_server> --command rm [switches] target      Switches:              -r, -R      Removes a directory and its contents recursively mkdir                  dbfs_client <db_user>@<db_server> --command mkdir directory_name Examples                     dbfs_client ETLUser@DBConnectString --command ls -l -a dbfs:/staging_area/directory1            dbfs_client ETLUser@DBConnectString --command cp -R  /tmp/1-Jan-2009-dump dbfs:/staging_area            dbfs_client ETLUser@DBConnectString --command rm dbfs:/staging_area/hello.txt            dbfs_client ETLUser@DBConnectString --command mkdir dbfs:/staging_area/directory2 [oracle@dm01db01 ~]$ ls -lh /tmp/largefile -rw-r--r-- 1 oracle oinstall 2.0G Sep 14 08:50 /tmp/largefile [oracle@dm01db01 ~]$ time dbfs_client  maclean_dbfs@dm01db01:1521/orcl --command cp /tmp/largefile dbfs:/mac_dbfs Password: /tmp/largefile -> dbfs:/mac_dbfs/largefile real    0m11.802s user    0m0.580s sys     0m2.375s ?Exadata?????2G?????? DBFS???11s => 200MB/s 


  • T-SQL: Compute Subtotals For A Range Of Rows

    - by John Dibling
    MSSQL 2008. I am trying to construct a SQL statement which returns, for each of a series of known ranges, the total of column B over all rows where column A falls within that range. The range is a sliding window, and should be recomputed as it would be using a loop. Here is an example of what I'm trying to do, much simplified from my actual problem. Suppose I have this data:

        table: Test

        Year        Sales
        ----------- -----------
        2000        200
        2001        200
        2002        200
        2003        200
        2004        200
        2005        200
        2006        200
        2007        200
        2008        200
        2009        200
        2010        200
        2011        200
        2012        200
        2013        200
        2014        200
        2015        200
        2016        200
        2017        200
        2018        200
        2019        200

    I want to construct a query which returns one row for every decade in the above table, like this:

        Desired Results:

        DecadeEnd   TotalSales
        ---------   ----------
        2009        2000
        2010        2000

    Where the first row is all the sales for the years 2000-2009, the second for years 2010-2019. The DecadeEnd is a sliding window that moves forward by a set amount for each row in the result set. To illustrate, here is one way I can accomplish this using a loop:

        declare @startYear int
        set @startYear = (select top(1) [Year] from Test order by [Year] asc)

        declare @endYear int
        set @endYear = (select top(1) [Year] from Test order by [Year] desc)

        select @startYear, @endYear

        create table DecadeSummary (DecadeEnd int, TtlSales int)

        declare @i int
        -- first decade ends 9 years after the first data point
        set @i = (@startYear + 9)

        while @i <= @endYear
        begin
            declare @ttlSalesThisDecade int
            set @ttlSalesThisDecade = (select SUM(Sales) from Test where(Year <= @i and Year >= (@i-9)))
            insert into DecadeSummary values(@i, @ttlSalesThisDecade)
            set @i = (@i + 9)
        end

        select * from DecadeSummary

    This returns the data I want:

        DecadeEnd   TtlSales
        ----------- -----------
        2009        2000
        2018        2000

    But it is very inefficient. How can I construct such a query?
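    For what it's worth, the set-based version of this is usually expressed by grouping on a computed bucket (for calendar decades, something like (Year / 10) * 10) instead of looping. The same grouping idea in C#/LINQ over in-memory data, purely as an illustration of the shape of the computation rather than as the T-SQL answer itself:

        using System;
        using System.Linq;

        class DecadeTotals
        {
            static void Main()
            {
                // Stand-in for the Test table: twenty years of sales at 200 each
                var sales = Enumerable.Range(2000, 20)
                                      .Select(y => new { Year = y, Sales = 200 });

                // Bucket each row into its decade and total the sales per bucket
                var byDecade = sales
                    .GroupBy(r => (r.Year / 10) * 10)
                    .Select(g => new { DecadeStart = g.Key, TotalSales = g.Sum(r => r.Sales) });

                foreach (var d in byDecade)
                    Console.WriteLine("{0}-{1}: {2}", d.DecadeStart, d.DecadeStart + 9, d.TotalSales);
            }
        }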


  • Moq Testing a private method .Many posts but still cannot make one example work

    - by devnet247
    Hi, I have seen many posts and questions about "Mocking a private method" but still cannot make it work and have not found a real answer. Let's forget the code smell and whether you should be doing it at all, etc.

    From what I understand, I have done the following:

    1) Created a class library "MyMoqSamples"
    2) Added a reference to Moq and NUnit
    3) Edited the AssemblyInfo file and added

        [assembly: InternalsVisibleTo("DynamicProxyGenAssembly2")]
        [assembly: InternalsVisibleTo("MyMoqSamples")]

    4) Now need to test a private method. Since it's a private method, it's not part of an interface.
    5) Added the following code:

        [TestFixture]
        public class Can_test_my_private_method
        {
            [Test]
            public void Should_be_able_to_test_my_private_method()
            {
                //TODO how do I test my DoSomething method
            }
        }

        public class CustomerInfo
        {
            public string Name { get; set; }
            public string Surname { get; set; }
        }

        public interface ICustomerService
        {
            List<CustomerInfo> GetCustomers();
        }

        public class CustomerService : ICustomerService
        {
            public List<CustomerInfo> GetCustomers()
            {
                return new List<CustomerInfo> { new CustomerInfo { Surname = "Bloggs", Name = "Jo" } };
            }

            protected virtual void DoSomething()
            {
            }
        }

    Could you provide me an example of how you would test my private method? Thanks a lot
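    One observation, offered as a sketch rather than a definitive answer: the DoSomething in the sample above is actually protected virtual, not private, which opens up options a truly private method does not have. It can be set up through Moq's Protected() extension (Moq.Protected namespace, addressed by method name), or simply exposed from a test-only subclass, which needs no mocking at all:

        using NUnit.Framework;

        // Test-only subclass that surfaces the protected member
        public class TestableCustomerService : CustomerService
        {
            public void CallDoSomething()
            {
                DoSomething();
            }
        }

        [TestFixture]
        public class Can_test_my_protected_method
        {
            [Test]
            public void Should_be_able_to_invoke_DoSomething()
            {
                var service = new TestableCustomerService();

                // DoSomething currently has no observable effect, so the most this test
                // can claim is that the call completes; assert on real state changes
                // once the method actually does something.
                Assert.DoesNotThrow(() => service.CallDoSomething());
            }
        }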


  • jQuery FullCalendar JSON date issue

    - by durilai
    I am integrating the jQuery plugin FullCalendar; overall it has been really straightforward. I have, however, run into a problem with adding events to the calendar. I am using ASP.NET MVC 1.0 and have found and followed this post. I am returning JSON to FullCalendar and the events are getting bound, but they all show up as all-day events. I am formatting the dates in ISO8601 format.

    Calendar JavaScript:

        $('#calendar').fullCalendar({
            events: "/Calendar/GetEvents/"
        });

    JsonResult:

        public JsonResult GetEvents(double start, double end)
        {
            var fromDate = Utility.Dates.ConvertFromUnixTimestamp(start);
            var toDate = Utility.Dates.ConvertFromUnixTimestamp(end);
            List<GenericEventList> events = GETGENERICLISTOFEVENTS();
            return Json(events.ToArray());
        }

    JSON result value:

        [{"id":2,"title":"Test Event","start":"2010-03-14T11:00:00","end":"2010-03-14T16:00:00"},
         {"id":3,"title":"Test Event1asasas","start":"2010-03-14T10:00:00","end":"2010-03-14T14:00:00"},
         {"id":4,"title":"Test Event12","start":"2010-03-14T16:00:00","end":"2010-03-14T17:00:00"},
         {"id":6,"title":"Test Event1aaa","start":"2010-03-14T10:00:00","end":"2010-03-14T14:00:00"}]

    Any help is truly appreciated!
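    Since the calendar already exchanges Unix timestamps with GetEvents for the view range, one low-risk thing to try (a sketch under assumptions, not a confirmed fix; the event type and controller shape here are hypothetical stand-ins for GenericEventList) is to emit the event dates the same way and to set allDay explicitly, so the plugin never has to guess from a string it may not parse:

        using System;
        using System.Linq;
        using System.Web.Mvc;

        // Hypothetical shape standing in for GenericEventList
        public class CalendarEvent
        {
            public int Id { get; set; }
            public string Title { get; set; }
            public DateTime Start { get; set; }
            public DateTime End { get; set; }
        }

        public class CalendarController : Controller
        {
            // DateTime -> seconds since the Unix epoch, the same convention the
            // plugin already uses for the start/end query parameters
            private static long ToUnixTimestamp(DateTime value)
            {
                var epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
                return (long)(value.ToUniversalTime() - epoch).TotalSeconds;
            }

            public JsonResult GetEvents(double start, double end)
            {
                CalendarEvent[] events = LoadEvents(); // placeholder for GETGENERICLISTOFEVENTS()

                var payload = events.Select(e => new
                {
                    id = e.Id,
                    title = e.Title,
                    start = ToUnixTimestamp(e.Start),
                    end = ToUnixTimestamp(e.End),
                    allDay = false // be explicit instead of letting the plugin infer it
                }).ToArray();

                return Json(payload);
            }

            private CalendarEvent[] LoadEvents()
            {
                return new[]
                {
                    new CalendarEvent
                    {
                        Id = 2, Title = "Test Event",
                        Start = new DateTime(2010, 3, 14, 11, 0, 0),
                        End = new DateTime(2010, 3, 14, 16, 0, 0)
                    }
                };
            }
        }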


  • apache htaccess rewrite rules make redirection loop

    - by Ali
    Hi guys, Have a very strange problem with Apache .htaccess URL Rewriting and Redirection. Here's my setup: I have a zend application with a single point of entry (index.php) directly under my apache document root (call this the "public" folder). I also have all other public files (images, js, css, etc.) under the public folder. Here, I also have a wordpress blog under the "blog" folder. There's an empty test folder too The Problem When I go to mydomain.com/blog, I get redirected to http://www.theredpin.com/blog (correctly), then to http://www.theredpin.com/blog/ (just with an extra / at end), finally to http://theredpin.com/blog/ -- and we're back where we started. The loop continues. What I don't understand is why would wordpress try to remove the www? I'm guessing it's wordpress because my empty test folder acts just fine! PLEASE HELP. I"M REALLY DESPERATE :( Thank you very much Other things that happen: When I go to mydomain.com, i correctly get redirected to www.mydomain.com When I go to www.mydomain.com, i correctly stay where I am When I go to www.mydomain.com/test or mydomain.com/test, behaviour is correct. Setup So my .htaccess file does the following: If there's no www., then add it and do a 301 redirect. Here's the code I use RewriteCond %{HTTP_HOST} ^mydomain.com [NC] RewriteRule ^(.*)$ http://www.mydomain.com/$1 [L,R=301] If the request is NOT for a resource (image, etc.), or the blog, then load zend application by rewriting to index.php RewriteRule !((^blog(/)?.*$)|(.(js|ico|gif|jpg|jpeg|png|css|cur|JPG|html|txt))$) index.php Thanks again for all your help!!! Ali


  • Negative ItemCount in SharePoint Document Library

    - by ccomet
    What can be done about negative numbers in library item counts? ItemCount is a read-only property, so what are you supposed to do when it is drastically incorrect?

    Earlier last week, I was doing some testing involving the copying and moving of files and folders from one document library to another. I was transferring the items from our actual document library to a sandbox "Test" library that I used to run all sorts of object model and workflow testing in before migrating to the public lists and libraries. I noticed that with files, things worked correctly, but when I copied a folder that had a file inside it (using SPFolder.CopyTo()), the item count for the test library did not actually update. Since this testing was mostly playing around, I paid it little heed.

    Today I was back in the test library to test a different workflow (regarding PDF conversion). While I was there, I decided to delete the folder I left last week since I didn't need it anymore. And that's when I saw the item count for the list drop to -1 in the All Site Content view. When I deleted the new PDF I had just uploaded, it then dropped to -2! I even checked with the object model: getting an instance of the library, I checked the ItemCount property... lo and behold, it was also -2.

    Is there any process which runs in the background, kinda like the one that cleans up workflow history, which will correct this kind of issue? Or is a programmer expected to keep watch for this kind of situation and come up with calculations to compensate for the "count penalty", as it were?
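    A small diagnostic sketch (server object model, hypothetical URL and library title) that puts the cached ItemCount next to an actual enumeration of the items; whatever repair route is taken, this at least shows how far off the cached figure is:

        using System;
        using Microsoft.SharePoint;

        class ItemCountCheck
        {
            static void Main()
            {
                // Hypothetical URL and library title; point these at the affected site
                using (SPSite site = new SPSite("http://server/sites/sandbox"))
                using (SPWeb web = site.OpenWeb())
                {
                    SPList library = web.Lists["Test"];

                    // ItemCount is the cached figure that has gone negative.
                    // Items.Count forces a real enumeration (files only, so the two can
                    // legitimately differ by the number of folders), but neither should
                    // ever be below zero.
                    Console.WriteLine("Cached ItemCount : {0}", library.ItemCount);
                    Console.WriteLine("Enumerated count : {0}", library.Items.Count);
                }
            }
        }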


  • Calling C# object method from IronPython

    - by Jason
    I'm trying to embed a scripting engine in my game. Since I'm writing it in C#, I figured IronPython would be a great fit, but the examples I've been able to find all focus on calling IronPython methods in C# instead of C# methods in IronPython scripts. To complicate things, I'm using Visual Studio 2010 RC1 on Windows 7 64 bit. IronRuby works like I expect it would, but I'm not very familiar with Ruby or Python syntax.

    What I'm doing:

        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();

        //Test class with a method that prints to the screen.
        scope.SetVariable("test", this);

        ScriptSource source = engine.CreateScriptSourceFromString("test.SayHello()",
            Microsoft.Scripting.SourceCodeKind.Statements);
        source.Execute(scope);

    This generates an error: "'TestClass' object has no attribute 'SayHello'". This exact setup works fine with IronRuby, though, using "self.test.SayHello()". I'm wary of using IronRuby because it doesn't appear as mature as IronPython. If it's close enough, I might go with that though. Any ideas? I know this has to be something simple.
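    One thing worth ruling out, offered as an assumption rather than a diagnosis: IronPython binds against the public members of a .NET object, so if TestClass or SayHello is internal, private, or a non-public nested type, the script sees an object with no such attribute. A minimal self-contained sketch of the shape that does resolve:

        using System;
        using IronPython.Hosting;
        using Microsoft.Scripting.Hosting;

        // Both the class handed to the script and the method it calls are public;
        // IronPython resolves "test.SayHello()" against public members.
        public class TestClass
        {
            public void SayHello()
            {
                Console.WriteLine("Hello from C#");
            }

            public void RunScript()
            {
                ScriptEngine engine = Python.CreateEngine();
                ScriptScope scope = engine.CreateScope();

                scope.SetVariable("test", this);

                ScriptSource source = engine.CreateScriptSourceFromString(
                    "test.SayHello()", Microsoft.Scripting.SourceCodeKind.Statements);
                source.Execute(scope);
            }

            public static void Main()
            {
                new TestClass().RunScript();
            }
        }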


  • Assert.AreEqual() Exception in VS2010

    - by Tom Miller
    I am fairly new to unit testing and am using VS2010 to develop in and run my tests. I have a simple test, illustrated below, that simply compares two System.Data.DataTableReader objects. I know that they are equal, as they are both created using the same object types and the same input file, and I have verified that the objects "look" the same. I realize I may be dealing with a couple of issues: one being whether or not this is the proper use of Assert.AreEqual, or even the proper way to test this scenario, and the other being the main issue I am dealing with, which is why this test fails with this exception:

        Failed  00:00:00.1000660  0
        Assert.AreEqual failed. Expected:<System.Data.DataTableReader>. Actual:<System.Data.DataTableReader>.

    Here is the unit test code that is failing:

        public void EntriesTest()
        {
            AuditLog target = new AuditLog();
            target.Init();

            DataSet ds = new DataSet();
            ds.ReadXml(TestContext.DataRow["AuditLogPath"].ToString());

            DataTableReader expected = ds.Tables[0].CreateDataReader();
            DataTableReader actual = target.Entries.Tables[0].CreateDataReader();

            Assert.AreEqual<DataTableReader>(expected, actual);
        }

    Any help would be greatly appreciated!
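    As far as I can tell, DataTableReader does not override Equals, so Assert.AreEqual on two reader instances falls back to reference comparison and fails even when the underlying data is identical. One alternative, sketched as a small helper rather than a definitive pattern, is to compare the tables themselves value by value:

        using System.Data;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        static class DataTableAssert
        {
            // Compares two tables cell by cell instead of relying on object identity.
            public static void AreEqual(DataTable expected, DataTable actual)
            {
                Assert.AreEqual(expected.Rows.Count, actual.Rows.Count, "Row counts differ");
                Assert.AreEqual(expected.Columns.Count, actual.Columns.Count, "Column counts differ");

                for (int r = 0; r < expected.Rows.Count; r++)
                {
                    for (int c = 0; c < expected.Columns.Count; c++)
                    {
                        Assert.AreEqual(expected.Rows[r][c], actual.Rows[r][c],
                            string.Format("Mismatch at row {0}, column {1}", r, c));
                    }
                }
            }
        }

        // Usage inside the test, replacing the reader comparison:
        //     DataTableAssert.AreEqual(ds.Tables[0], target.Entries.Tables[0]);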


  • Object Moved error while consuming a webservice

    - by NandaGopal
    Hi - I have a quick question and request you all to respond soon. I've developed a web service with forms-based authentication as below.

    1. An entry in web.config as below.

    2. In the login page, the user is validated on the button click event as follows:

        if (txtUserName.Text == "test" && txtPassword.Text == "test")
        {
            FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(
                1,                                    // Ticket version
                txtUserName.Text,                     // Username to be associated with this ticket
                DateTime.Now,                         // Date/time ticket was issued
                DateTime.Now.AddMinutes(50),          // Date and time the cookie will expire
                false,                                // if user has checked remember me then create persistent cookie
                "",                                   // store the user data, in this case roles of the user
                FormsAuthentication.FormsCookiePath); // Cookie path specified in the web.config <forms> tag, if any

            string hashCookies = FormsAuthentication.Encrypt(ticket);
            HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, hashCookies); // Hashed ticket
            Response.Cookies.Add(cookie);

            string returnUrl = Request.QueryString["ReturnUrl"];
            if (returnUrl == null)
                returnUrl = "~/Default.aspx";
            Response.Redirect(returnUrl);
        }

    3. The web service has a default web method:

        [WebMethod]
        public string HelloWorld()
        {
            return "Hello World";
        }

    4. From a web application I make a call to the web service by creating a proxy after adding a web reference to the above service:

        localhost.Service1 service = new localhost.Service1();
        service.AllowAutoRedirect = false;
        NetworkCredential credentials = new NetworkCredential("test", "test");
        service.Credentials = credentials;
        string hello = service.HelloWorld();
        Response.Write(hello);

    When consuming it in the web application, the exception below is thrown from the web service proxy:

        Object moved
        Object moved to here.

    Could you please share any thoughts on how to fix it?
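    The "Object moved" text is the body of the 302 redirect to the forms-authentication login page: NetworkCredential only supplies Windows/Basic/Digest credentials, so the unauthenticated request to the .asmx gets bounced to the login URL, and with AllowAutoRedirect turned off the proxy surfaces that redirect page. One possible approach, sketched under the assumption that the service stays behind forms authentication and gains a Login web method that calls FormsAuthentication.SetAuthCookie for valid credentials, is to let the proxy carry the resulting cookie:

        // Client side (sketch). Login is an assumed new web method on the service;
        // the proxy must be regenerated after it is added.
        localhost.Service1 service = new localhost.Service1();

        // One cookie container shared across calls, so the forms ticket issued by
        // Login is replayed on the HelloWorld call.
        service.CookieContainer = new System.Net.CookieContainer();

        if (service.Login("test", "test"))
        {
            string hello = service.HelloWorld();
            Response.Write(hello);
        }

    The other common route, if the service does not actually need protecting, is a location element in web.config that allows anonymous access to the .asmx path.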


  • How to use Application Verifier to find memory leaks

    - by Patrick
    I want to find memory leaks in my application using standard utilities. Previously I used my own memory allocator, but other people (yes, you Neil) suggested I use Microsoft's Application Verifier, but I can't seem to get it to report my leaks. I have the following simple application:

        #include <iostream>
        #include <conio.h>

        class X
        {
        public:
            X::X() : m_value(123) {}
        private:
            int m_value;
        };

        void main()
        {
            X *p1 = 0;
            X *p2 = 0;
            X *p3 = 0;
            p1 = new X();
            p2 = new X();
            p3 = new X();
            delete p1;
            delete p3;
        }

    This test clearly contains a memory leak: p2 is new'd but not deleted. I build the executable using the following command lines:

        cl /c /EHsc /Zi /Od /MDd test.cpp
        link /debug test.obj

    I downloaded Application Verifier (4.0.0665) and enabled all checks. If I now run my test application I can see a log of it in Application Verifier, but I don't see the memory leak.

    Questions:

    Why doesn't Application Verifier report a leak? Or isn't Application Verifier really intended to find leaks? If it isn't, which other tools are available to clearly report leaks at the end of the application (i.e. not by taking regular snapshots and comparing them, since this is not possible in an application taking 1 GB or more), including the call stack of the place of allocation (so not the simple leak reporting at the end of the CRT)?

    If I don't find a decent utility, I still have to rely on my own memory manager (which does it perfectly).


  • Installing The ruby-gmail rubygem on Mac OS Snow Leopard

    - by johnnygoodman
    I'm working off these instructions: http://github.com/dcparker/ruby-gmail From the home directory I do a standard install and good stuff happens: Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ sudo gem install ruby-gmail Successfully installed ruby-gmail-0.2.1 1 gem installed Installing ri documentation for ruby-gmail-0.2.1... Installing RDoc documentation for ruby-gmail-0.2.1... I head over to my ~/www dir where I run scripts that include other rubygems successfully and create a gmail directory. I create a script that includes rubygems and gmail, but does nothing else: Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ pwd /Users/johnnygoodman/www/gmail Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ ls test-send.rb Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ cat test-send.rb require 'rubygems' require 'gmail' I run this script and the errors begin: Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ ruby test-send.rb /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- mime/message (LoadError) from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require' from /Library/Ruby/Gems/1.8/gems/ruby-gmail-0.2.1/lib/gmail/message.rb:1 from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require' from /Library/Ruby/Gems/1.8/gems/ruby-gmail-0.2.1/lib/gmail.rb:168 from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:36:in `gem_original_require' from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:36:in `require' from test-send.rb:2 Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ Here's my gem env: Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ gem environment RubyGems Environment: - RUBYGEMS VERSION: 1.3.7 - RUBY VERSION: 1.8.7 (2009-06-08 patchlevel 173) [universal-darwin10.0] - INSTALLATION DIRECTORY: /Library/Ruby/Gems/1.8 - RUBY EXECUTABLE: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby - EXECUTABLE DIRECTORY: /usr/bin - RUBYGEMS PLATFORMS: - ruby - universal-darwin-10 - GEM PATHS: - /Library/Ruby/Gems/1.8 - /Users/johnnygoodman/.gem/ruby/1.8 - /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8 - GEM CONFIGURATION: - :update_sources => true - :verbose => true - :benchmark => false - :backtrace => false - :bulk_threshold => 1000 - :sources => ["http://rubygems.org/", "http://gems.github.com"] - REMOTE SOURCES: - http://rubygems.org/ - http://gems.github.com The path that the errors give when I run the script is not the same as the GEM PATHS given in the env output. However, I don't know how to make them match or if that's the significant thing here.


  • Linq-to-Sql IIS7 Login failed for user ‘DOMAIN\MACHINENAME$’

    - by cfdev9
    I am encountering unexpected behaviour using a Linq-to-SQL DataContext. When I run my application locally it works as expected, but after deploying to a test server which runs IIS7, I get the error "Login failed for user 'DOMAIN\MACHINENAME$'" when attempting to open objects from the DataContext.

    This code explains the error; it breaks on the very last line with "System.Data.SqlClient.SqlException: Login failed for user":

        var connStr = "Data Source=server;Initial Catalog=Test;User Id=testuser;Password=password";

        //Test 1
        var conn1 = new System.Data.SqlClient.SqlConnection(connStr);
        var cmdString = "SELECT COUNT(*) FROM Table1";
        var cmd = new System.Data.SqlClient.SqlCommand(cmdString, conn1);
        conn1.Open();
        var count1 = cmd.ExecuteScalar();
        conn1.Close();

        //Test 2
        var conn2 = new System.Data.SqlClient.SqlConnection(connStr);
        var context = new TestDataContext(conn2);
        var count2 = context.Table1s.Count();

    The connection string is not even using integrated security, so why is Linq-to-SQL trying to connect as a specific user? If I change the server name in the connection string I get a different error, so it is using at least part of the connection string, but apparently ignoring the User Id and Password. Very confused.
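    'DOMAIN\MACHINENAME$' is the machine account, which is what SQL Server sees when a connection arrives with integrated security under the IIS application-pool identity, so something in the Test 2 path is opening a different connection than the one constructed above (a connection string baked into the generated context's designer settings or pulled from web.config is the usual suspect). A small diagnostic sketch, assuming the generated context exposes the usual constructors, that makes the effective connection string visible:

        using System;
        using System.Data.Linq;
        using System.Data.SqlClient;

        class DataContextDiagnostics
        {
            static void Main()
            {
                var connStr = "Data Source=server;Initial Catalog=Test;User Id=testuser;Password=password";

                var conn2 = new SqlConnection(connStr);
                using (var context = new TestDataContext(conn2))
                {
                    // What the context will really connect with; if this differs from
                    // connStr (e.g. shows Integrated Security from web.config), the
                    // designer-generated settings are overriding the injected connection.
                    Console.WriteLine(context.Connection.ConnectionString);

                    // Passing the string instead of a SqlConnection is also worth trying,
                    // since it takes the separately managed connection out of the picture:
                    // using (var context2 = new TestDataContext(connStr)) { ... }
                }
            }
        }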


  • How to mix C++ Qt objects and Qt Jambi objects

    - by Dan
    Hi, I'm trying to combine some existing Qt code written in C++ with some code written in Java using Qt Jambi, but I'm not quite sure how to do it. I'm basically trying to acieve two things: Pass a QObject from C++ to Java using JNI Pass a Qt Jambi QObject from Java to C++ It looks like I can pass the pointer directly and then wrap it in QNativePointer on the Java side, but I can't figure out how to turn a QNativePointer back into the original object, wrapped by Qt Jambi. Eg: I can pass a QWidget* as a long to Java and then create a QNativePointer in Java, but how can I then construct a QWidget out of this? QJambiObject and QObject dont seem to have a "setNativePointer" method and I'm not sure how to convert it. In C++: QWidget* widget = ... jclass cls = env->FindClass("Test"); jmethodID mid = env->GetStaticMethodID(cls, "test", "(I)V"); env->CallStaticVoidMethod(cls, mid, int(widget)); In Java: public class Test { public static void test (int ptr) { QNativePointer pointer = new QNativePointer(QNativePointer.Type.Int); pointer.setIntValue(ptr); QWidget widget = ... Thanks!


  • How to require fullscreen mode in a jQTouch application?

    - by Christopher Young
    I'm using jQTouch to develop a version of a website optimized for safari on the iphone. The jQTouch demo helpfully shows how to show an "install this" message for users not using full screen mode and hide it for those who are. When in fullscreen mode, the body should have the class "fullscreen." So you can hide the "install this" message for people who have already added your app to their home page by adding this css rule to your stylesheet: body.fullscreen #home .info { display: none; } What I'd like to do is require users to use the app in fullscreen mode only. When viewed from the regular browser, they should only see a message asking them to install the app. That message should of course be hidden otherwise. This ought to be really, really easy, so I must just be missing something obvious. I thought one way to do this would be to simply test for the class "fullscreen" on the body: if it's not there, use goTo to get to another div, or hide the other divs, or something like that. Strangely, however, this doesn't work. As a test, I've still got the original "info" message, as in the jQTouch demo, and it doesn't show up when I launch in fullscreen mode. So the body must have the fullscreen class. And yet I can't find any other trace of it: when I put this alert to test things after the document has loaded, I get nothing when launching in fullscreen mode: alert($("body").attr("class")); I also thought I might test for fullscreen mode by checking for the value of the fullScreen boolean. But this doesn't seem to work either. What am I missing? What is the best way to do this?


  • phpexcel write excel and save ?

    - by boss
    code: <?php /** PHPExcel */ require_once '../Classes/PHPExcel.php'; /** PHPExcel_IOFactory */ require_once '../Classes/PHPExcel/IOFactory.php'; // Create new PHPExcel object $objPHPExcel = new PHPExcel(); // Set properties $objPHPExcel->getProperties()->setCreator("Maarten Balliauw") ->setLastModifiedBy("Maarten Balliauw") ->setTitle("Office 2007 XLSX Test Document") ->setSubject("Office 2007 XLSX Test Document") ->setDescription("Test document for Office 2007 XLSX, generated using PHP classes.") ->setKeywords("office 2007 openxml php") ->setCategory("Test result file"); $result = 'select * from table1'; for($i=0;$i<count($result);$i++){ $result1 = 'select * from table2 where table1_id = ' . $result[$i]['table1_id']; for ($j=0;$j<count($result1);$j++) { $objPHPExcel->setActiveSheetIndex(0)->setCellValue('A' . $j, $result1[$j]['name']); } // Set active sheet index to the first sheet, so Excel opens this as the first sheet $objPHPExcel->setActiveSheetIndex(0); // Save Excel 2007 file $objWriter = PHPExcel_IOFactory::createWriter($objPHPExcel, 'Excel2007'); $objWriter->save(str_replace('.php', '.xlsx', __FILE__)); // Echo done echo date('H:i:s') . " Done writing file.\r\n"; } ?> The above code executes and save n no of .xlsx files in the folder, but the problem im getting is biggest count(result1) in the for loop executing in all saved excel files. Please give me the solution.


  • Code to connect to remote server using perl

    - by user304852
    I'm written small code to connect to remote server using perl but observing error messages #!/usr/bin/perl -w use Net::Telnet; $telnet = new Net::Telnet ( Timeout=>60, Errmode=>'die'); $telnet->open('192.168.50.40'); $telnet->waitfor('/login:/'); $telnet->print('queen'); $telnet->waitfor('/password:/'); $telnet->print('kinG!'); $telnet->waitfor('/:/'); $telnet->print('vol >> C:\result.txt'); $telnet->waitfor('/:/'); $telnet->cmd("mkdir vol"); $telnet->print('mkdir vol234'); $telnet->cmd("mkdir vol1"); $telnet->waitfor('/\$ $/i'); $telnet->print('whoamI'); print $output; But while running i'm getting following errors C:\>perl -c E:\test\net.pl E:\test\net.pl syntax OK C:\>perl E:\test\net.pl command timed-out at E:\test\net.pl line 13 C:\> Help me in this regard.. i'm not much aware of perl


  • Passing an array argument from Excel VBA to a WCF service

    - by PrgTrdr
    I'm trying to pass an array as an argument to my WCF service. To test this using Damian's sample code, I modified GetData it to try to pass an array of ints instead of a single int as an argument: using System; using System.ServiceModel; namespace WcfService1 { [ServiceContract] public interface IService1 { [OperationContract] string GetData(int[] value); [OperationContract] object[] GetSomeObjects(); } } using System; namespace WcfService1 { public class Service1 : IService1 { public string GetData(int[] value) { return string.Format("You entered: {0}", value[0]); } public object[] GetSomeObjects() { return new object[] { "String", 123, 44.55, DateTime.Now }; } } } Excel VBA code: Dim addr As String addr = "service:mexAddress=""net.tcp://localhost:7891/Test/WcfService1/Service1/Mex""," addr = addr + "address=""net.tcp://localhost:7891/Test/WcfService1/Service1/""," addr = addr + "contract=""IService1"", contractNamespace=""http://tempuri.org/""," addr = addr + "binding=""NetTcpBinding_IService1"", bindingNamespace=""http://tempuri.org/""" Dim service1 As Object Set service1 = GetObject(addr) Dim Sectors( 0 to 2 ) as Integer Sectors(0) = 10 MsgBox service1.GetData(Sectors) This code works fine with the WCF Test Client, but when I try to use it from Excel, I have this problem. When Excel gets to the service1.GetData call, it reports the following error: Run-time error '-2147467261 (80004003)' Automation error Invalid Pointer It looks like there is some incompatibility between the interface specification and the VBA call. Have you ever tried to pass an array from VBA into WCF? Am I doing something wrong or is this not supported using the Service moniker?


  • Webforms MVP Passive View - event handling

    - by ss2k
    Should the view have nothing event-specific in its interface, and call plain presenter methods to handle events rather than exposing any formal event handlers? For instance:

        // ASPX
        protected void OnSaveButtonClicked(object sender, EventArgs e)
        {
            _Presenter.OnSave();
        }

    Or should the view have events defined in its interface and link those up explicitly to control events on the page?

        // View
        public interface IView
        {
            ...
            event EventHandler Saved;
            ...
        }

        // ASPX page implementing the view
        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);
            SaveButton.Click += delegate { Saved(this, e); };
        }

        // Presenter
        internal Presenter(IView view, IRepository repository)
        {
            _view = view;
            _repository = repository;
            view.Saved += Save;
        }

    The second seems like a whole lot of plumbing code to add all over. My intention is to understand the benefits of each style, not just to get a blanket answer about which to use. My main goals are clarity and high-value testability. Testability overall is important, but I wouldn't sacrifice design simplicity and clarity to be able to add another type of test that doesn't lead to much gain over the test cases already possible with a simpler design. If a design choice does offer more testability, please include an example (pseudo code is fine) of the type of test it can now offer, so I can decide whether I value that type of extra test enough. Thanks!
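    As an illustration of the kind of test the event-based style enables (a sketch with hypothetical test doubles, since IView and IRepository are only partially shown above): the presenter's entire save path can be exercised by raising the event on a stub view, with no page, button, or HTTP context involved.

        using System;
        using NUnit.Framework;

        // Hypothetical stub for the view interface shown above
        class StubView : IView
        {
            public event EventHandler Saved;

            public void RaiseSaved()
            {
                if (Saved != null) Saved(this, EventArgs.Empty);
            }
        }

        // Hypothetical fake, assuming IRepository exposes a Save method
        class FakeRepository : IRepository
        {
            public bool SaveWasCalled;

            public void Save(object item)
            {
                SaveWasCalled = true;
            }
        }

        [TestFixture]
        public class PresenterTests
        {
            [Test]
            public void Raising_Saved_on_the_view_saves_through_the_repository()
            {
                var view = new StubView();
                var repository = new FakeRepository();
                var presenter = new Presenter(view, repository); // subscribes to view.Saved in its constructor

                view.RaiseSaved(); // simulate the user clicking Save

                Assert.IsTrue(repository.SaveWasCalled);
            }
        }

    The method-call style can reach the same presenter logic by calling _Presenter.OnSave() directly; what the event style adds is that the subscription itself, the wiring done in the presenter's constructor, is exercised by the test.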


  • Default Transaction Timeout

    - by MattH
    I used to set transaction timeouts by using TransactionOptions.Timeout, but have decided, for ease of maintenance, to use the config approach:

        <system.transactions>
          <defaultSettings timeout="00:01:00" />
        </system.transactions>

    Of course, after putting this in, I wanted to test it was working, so I reduced the timeout to 5 seconds and ran a test that lasted longer than this, but the transaction does not appear to abort! If I adjust the test to set TransactionOptions.Timeout to 5 seconds, the test works as expected.

    After investigating, I think the problem relates to TransactionOptions.Timeout, even though I'm no longer using it. I still need to use TransactionOptions so I can set IsolationLevel, but I no longer set the Timeout value. If I look at this object after I create it, the timeout value is 00:00:00, which equates to infinity. Does this mean my value set in the config file is being ignored?

    To summarise: Is it impossible to mix the config setting and the use of TransactionOptions? If not, is there any way to extract the config setting at runtime and use this to set the Timeout property? [Edit] OR set the default isolation level without using TransactionOptions?
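    On the second question, the configured default is exposed at runtime through System.Transactions.TransactionManager.DefaultTimeout, so one way to keep a custom IsolationLevel without losing the configured value is to copy it into the options. A sketch of that approach:

        using System;
        using System.Transactions;

        class TransactionWithConfiguredTimeout
        {
            static void Main()
            {
                var options = new TransactionOptions
                {
                    IsolationLevel = IsolationLevel.ReadCommitted,
                    // Re-use the <defaultSettings timeout="..."/> value rather than
                    // leaving the struct default of 00:00:00
                    Timeout = TransactionManager.DefaultTimeout
                };

                using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
                {
                    // transactional work goes here
                    scope.Complete();
                }
            }
        }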


  • Output asp.net masterpage webform to html email

    - by Steve McCall
    Hi, I'm having a bit of a nightmare here! I'm trying to output a webform to HTML using Page.RenderControl and an HtmlTextWriter, but it is resulting in a blank email.

    Code:

        StringBuilder sb = new StringBuilder();
        StringWriter sw = new StringWriter(sb);
        HtmlTextWriter htmlTW = new HtmlTextWriter(sw);
        page.RenderControl(htmlTW);
        String content = sb.ToString();

        MailMessage mail = new MailMessage();
        mail.From = new MailAddress("[email protected]");
        mail.To.Add("[email protected]");
        mail.IsBodyHtml = true;
        mail.Subject = "Test";
        mail.Body = content;

        SmtpClient smtp = new SmtpClient("1111");
        smtp.Send(mail);
        Response.Write("Message Sent");

    I also tried it by rendering a single textbox and got an error saying it needed to be within form tags (which are on the masterpage). I tried this fix: http://forums.asp.net/p/1016960/1368933.aspx#1368933 and added:

        public override void VerifyRenderingInServerForm(Control control)
        {
            return;
        }

    But now the error I get is:

        VerifyRenderingInServerForm(Control)': no suitable method found to override

    Does anyone have a fix for this? I'm tearing my hair out!! Thanks, Steve
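    On the override error specifically: VerifyRenderingInServerForm is defined on System.Web.UI.Page, so the override only compiles when it is placed in a class that derives from Page (the code-behind of the page being rendered), not in a user control, master page, or helper class. A minimal sketch (hypothetical class name) of where it has to live:

        using System.Web.UI;

        // Code-behind of the page whose markup is being captured for the e-mail
        public partial class EmailPreviewPage : Page
        {
            // Intentionally empty: lets server controls render even though they are
            // being written to an HtmlTextWriter outside the normal form processing.
            public override void VerifyRenderingInServerForm(Control control)
            {
            }
        }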


  • EclEmma JAVA Code coverage - Unable to coverage service layer of RESTful Webservice

    - by Radhika
    I am using EMMA eclipse plugin to generate code coverage reports. My application is a RESTFul webservice. Junits are written such that a client is created for the webservice and invoked with various inputs. However EMMA shows 0% coverage for the source folder. The test folder alone is covered. The application server(jetty server) is started using a main method. Report: Element Coverage Covered Instructions Total Instructions MyRestFulService 13.6% 900 11846 src 0.5% 49 10412 test 98% 1021 1434 Junit Test method: @Test public final void testAddFlow() throws Exception { Client c = Client.create(); WebResource webResource = c.resource(BASE_URI); // Sample files for Add String xhtmlDocument = null; Iterator iter = mapOfAddFiles.entrySet().iterator(); while (iter.hasNext()) { Map.Entry pairs = (Map.Entry) iter.next(); try { document = helper.readFile(requestPath + pairs.getKey()); } catch (Exception e) { // TODO Auto-generated catch block e.printStackTrace(); } /* POST */ MultiPart multiPart = new MultiPart(); multiPart.bodyPart(.... ........... ClientResponse response = webResource.path("/add").type( MEDIATYPE_MULTIPART_MIXED).post(ClientResponse.class, multiPart); assertEquals("TESTING ADD FOR >>>>>>> " + pairs.getKey(), Status.OK, response.getClientResponseStatus()); } } } Invoked service method: @POST @Path("add") @Consumes("multipart/mixed") public Response add(MultiPart multiPart) throws Exception { Status status = null; List<BodyPart> bodyParts = null; bodyParts = multiPart.getBodyParts(); status = //call to business layer return Response.ok(status).build(); }


  • Unit testing Monorail's RenderText method

    - by MikeWyatt
    I'm doing some maintenance on an older web application written in Monorail v1.0.3. I want to unit test an action that uses RenderText(). How do I extract the content in my test? Reading from controller.Response.OutputStream doesn't work, since the response stream is either not setup properly in PrepareController(), or is closed in RenderText(). Example Action public DeleteFoo( int id ) { var success= false; var foo = Service.Get<Foo>( id ); if( foo != null && CurrentUser.IsInRole( "CanDeleteFoo" ) ) { Service.Delete<Foo>( id ); success = true; } CancelView(); RenderText( "{ success: " + success + " }" ); } Example Test (using Moq) [Test] public void DeleteFoo() { var controller = new FooController (); PrepareController ( controller ); var foo = new Foo { Id = 123 }; var mockService = new Mock < Service > (); mockService.Setup ( s => s.Get<Foo> ( foo.Id ) ).Returns ( foo ); controller.Service = mockService.Object; controller.DeleteTicket ( foo.Id ); mockService.Verify ( s => s.Delete<Foo> ( foo.Id ) ); Assert.AreEqual ( "{success:true}", GetResponse ( Response ) ); } // response.OutputStream.Seek throws an "System.ObjectDisposedException: Cannot access a closed Stream." exception private static string GetResponse( IResponse response ) { response.OutputStream.Seek ( 0, SeekOrigin.Begin ); var buffer = new byte[response.OutputStream.Length]; response.OutputStream.Read ( buffer, 0, buffer.Length ); return Encoding.ASCII.GetString ( buffer ); }


  • Could not load type from assembly error

    - by George Mauer
    I have written the following simple test in trying to learn Castle Windsor's fluent interface:

        using NUnit.Framework;
        using Castle.Windsor;
        using System.Collections;
        using Castle.MicroKernel.Registration;

        namespace WindsorSample
        {
            public class MyComponent : IMyComponent
            {
                public MyComponent(int start_at)
                {
                    this.Value = start_at;
                }

                public int Value { get; private set; }
            }

            public interface IMyComponent
            {
                int Value { get; }
            }

            [TestFixture]
            public class ConcreteImplFixture
            {
                [Test]
                public void ResolvingConcreteImplShouldInitialiseValue()
                {
                    IWindsorContainer container = new WindsorContainer();

                    container.Register(
                        Component.For<IMyComponent>()
                                 .ImplementedBy<MyComponent>()
                                 .Parameters(Parameter.ForKey("start_at").Eq("1")));

                    IMyComponent resolvedComp = container.Resolve<IMyComponent>();

                    Assert.AreEqual(resolvedComp.Value, 1);
                }
            }
        }

    When I execute the test through TestDriven.NET I get the following error:

        System.TypeLoadException : Could not load type 'Castle.MicroKernel.Registration.IRegistration' from assembly 'Castle.MicroKernel, Version=1.0.3.0, Culture=neutral, PublicKeyToken=407dd0808d44fbdc'.
        at WindsorSample.ConcreteImplFixture.ResolvingConcreteImplShouldInitialiseValue()

    When I execute the test through the NUnit GUI I get:

        WindsorSample.ConcreteImplFixture.ResolvingConcreteImplShouldInitialiseValue:
        System.IO.FileNotFoundException : Could not load file or assembly 'Castle.Windsor, Version=1.0.3.0, Culture=neutral, PublicKeyToken=407dd0808d44fbdc' or one of its dependencies. The system cannot find the file specified.

    If I open the assembly that I am referencing in Reflector I can see its information is:

        Castle.MicroKernel, Version=1.0.3.0, Culture=neutral, PublicKeyToken=407dd0808d44fbdc

    and that it definitely contains Castle.MicroKernel.Registration.IRegistration. What could be going on? I should mention that the binaries are taken from the latest build of Castle, though I have never worked with nant, so I didn't bother re-compiling from source and just took the files in the bin directory. I should also point out that my project compiles with no problem.
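    Both failures look like the test runner loading a different copy of the Castle binaries than the ones the project compiled against (a stale version in the GAC, in the runner's folder, or a dependency that never got copied is the usual suspect). A small diagnostic sketch that can be dropped into the fixture to list exactly which Castle assemblies the test process has loaded, and from where:

        using System;
        using System.Linq;

        static class AssemblyDiagnostics
        {
            // Call this from a test (or a SetUp method) and compare the versions and
            // paths against the binaries you think you referenced.
            public static void DumpCastleAssemblies()
            {
                var castleAssemblies = AppDomain.CurrentDomain.GetAssemblies()
                    .Where(a => a.FullName.StartsWith("Castle.", StringComparison.OrdinalIgnoreCase));

                foreach (var assembly in castleAssemblies)
                {
                    Console.WriteLine("{0} => {1}", assembly.FullName, assembly.Location);
                }
            }
        }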


  • datagrid filter in c# using sql server

    - by malou17
    How do I filter data in a datagrid? For example, if the student number option is selected in the combo box and 1001 is entered in the text field, all records for 1001 should appear in the datagrid. We are using SQL Server.

        private void button2_Click(object sender, EventArgs e)
        {
            if (cbofilter.SelectedIndex == 0)
            {
                string sql;
                SqlConnection conn = new SqlConnection();
                conn.ConnectionString = "Server= " + Environment.MachineName.ToString() + @"\; Initial Catalog=TEST;Integrated Security = true";
                SqlDataAdapter da = new SqlDataAdapter();
                DataSet ds1 = new DataSet();
                ds1 = DBConn.getStudentDetails("sp_RetrieveSTUDNO");
                sql = "Select * from Test where STUDNO like '" + txtvalue.Text + "'";
                SqlCommand cmd = new SqlCommand(sql, conn);
                cmd.CommandType = CommandType.Text;
                da.SelectCommand = cmd;
                da.Fill(ds1);
                dbgStudentDetails.DataSource = ds1;
                dbgStudentDetails.DataMember = ds1.Tables[0].TableName;
                dbgStudentDetails.Refresh();
            }
            else if (cbofilter.SelectedIndex == 1)
            {
                //string sql;
                //SqlConnection conn = new SqlConnection();
                //conn.ConnectionString = "Server= " + Environment.MachineName.ToString() + @"\; Initial Catalog=TEST;Integrated Security = true";
                //SqlDataAdapter da = new SqlDataAdapter();
                //DataSet ds1 = new DataSet();
                //ds1 = DBConn.getStudentDetails("sp_RetrieveSTUDNO");
                //sql = "Select * from Test where Name like '" + txtvalue.Text + "'";
                //SqlCommand cmd = new SqlCommand(sql,conn);
                //cmd.CommandType = CommandType.Text;
                //da.SelectCommand = cmd;
                //da.Fill(ds1);
                // dbgStudentDetails.DataSource = ds1;
                //dbgStudentDetails.DataMember = ds1.Tables[0].TableName;
                //ds.Tables[0].DefaultView.RowFilter = "Studno = + txtvalue.text + ";
                dbgStudentDetails.DataSource = ds.Tables[0];
                dbgStudentDetails.Refresh();
            }
        }
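    A sketch of one way to handle both combo selections with a single code path, using a parameterized query so the typed value never gets concatenated into the SQL (control names follow the question; the connection string is assumed to be the one already shown above):

        using System.Data;
        using System.Data.SqlClient;

        private void FilterGrid(string connectionString)
        {
            // Map the combo selection to the column being filtered; only these two
            // fixed names can ever reach the SQL text.
            string column = cbofilter.SelectedIndex == 0 ? "STUDNO" : "Name";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT * FROM Test WHERE " + column + " LIKE @value", conn))
            {
                // The user-typed value travels as a parameter, not as part of the statement
                cmd.Parameters.AddWithValue("@value", txtvalue.Text);

                var da = new SqlDataAdapter(cmd);
                var ds1 = new DataSet();
                da.Fill(ds1);

                dbgStudentDetails.DataSource = ds1.Tables[0];
                dbgStudentDetails.Refresh();
            }
        }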

