Search Results

Search found 40393 results on 1616 pages for 'single table inheritance'.

Page 97 of 1616

  • Logging on to two sites simultaneously

    - by James Wakefield
    I want to log on to two sites simultaneously to enable a single sign-on solution. We have a smallish wiki created with Apple Wiki, and we have an intranet site on an ASPX CMS system by Elcom. Both use Active Directory for credentials. Currently they are on different domains, but we could enable a rewrite using our load balancer (Citrix NetScaler) or IIS. These sites are on different servers: one a mysterious Mac system, the other IIS v6.0 on Windows 2003. Now, I am almost certain that a reverse proxy setup will solve this, but I really just need someone to agree that it does, and to point out anything I should look out for. I just want to have an invisible log-on screen in an iframe and clone the user name and password into it using JavaScript.

    Read the article

  • Can I use a single pointer for my hash table in C?

    - by aks
    I want to implement a hash table in the following manner:

      struct list {
          char *string;
          struct list *next;
      };

      struct hash_table {
          int size;             /* the size of the table */
          struct list **table;  /* the table elements */
      };

    Instead of struct hash_table like above, can I use:

      struct hash_table {
          int size;            /* the size of the table */
          struct list *table;  /* the table elements */
      };

    That is, can I just use a single pointer instead of a double pointer for the hash table elements? If so, please explain the difference in the way the elements will be stored in the table.
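    A minimal sketch of what the second level of indirection buys you, assuming separate chaining and using the structs above: with struct list **table, the table is an array of size bucket-head pointers (NULL meaning an empty bucket), whereas a single struct list *table would point at one chain, or at an array of embedded dummy head nodes. The helper below is hypothetical, just to illustrate the allocation:

      #include <stdlib.h>

      struct hash_table *hash_table_new(int n)
      {
          struct hash_table *h = malloc(sizeof *h);
          if (h == NULL)
              return NULL;
          h->size = n;
          /* one list-head pointer per bucket: this is why the field is
             struct list **table -- it points at an array of pointers,
             each of which starts out NULL (an empty chain) */
          h->table = calloc(n, sizeof *h->table);
          if (h->table == NULL) {
              free(h);
              return NULL;
          }
          return h;
      }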

    Read the article

  • Error Installing ruby with RVM Single User mode on Arch Linux

    - by ChrisBurnor
    I've just installed RVM on Arch Linux x64 in single-user mode via the recommended install script:

      curl -L https://get.rvm.io | bash -s stable

    I've also installed all the requirements listed in rvm requirements. However, I'm having trouble actually installing any version of ruby, and I get the following error:

      arch:~ % rvm install 1.9.3
      No binary rubies available for: ///ruby-1.9.3-p194.
      Continuing with compilation. Please read 'rvm mount' to get more information on binary rubies.
      Fetching yaml-0.1.4.tar.gz to /home/christopher/.rvm/archives
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      100  460k  100  460k    0     0   702k      0 --:--:-- --:--:-- --:--:--  767k
      Extracting yaml-0.1.4.tar.gz to /home/christopher/.rvm/src
      Prepare yaml in /home/christopher/.rvm/src/yaml-0.1.4.
      Configuring yaml in /home/christopher/.rvm/src/yaml-0.1.4.
      Error running ' ./configure --prefix=/home/christopher/.rvm/usr ', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/yaml/configure.log
      Compiling yaml in /home/christopher/.rvm/src/yaml-0.1.4.
      Error running 'make', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/yaml/make.log
      Please note that it's required to reinstall all rubies: rvm reinstall all --force
      Installing Ruby from source to: /home/christopher/.rvm/rubies/ruby-1.9.3-p194, this may take a while depending on your cpu(s)...
      ruby-1.9.3-p194 - #downloading ruby-1.9.3-p194, this may take a while depending on your connection...
      ruby-1.9.3-p194 - #extracting ruby-1.9.3-p194 to /home/christopher/.rvm/src/ruby-1.9.3-p194
      ruby-1.9.3-p194 - #extracted to /home/christopher/.rvm/src/ruby-1.9.3-p194
      Skipping configure step, 'configure' does not exist, did autoreconf not run successfully?
      ruby-1.9.3-p194 - #compiling
      Error running 'make', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/make.log
      There has been an error while running make. Halting the installation.

    The log files are as follows:

      arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/yaml/configure.log
      __rvm_log_command:32: permission denied:
      arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/yaml/make.log
      make: *** No targets specified and no makefile found.  Stop.
      arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/make.log
      make: *** No targets specified and no makefile found.  Stop.
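    One hedged diagnosis, since the make failures are just fallout from configure never running: "__rvm_log_command:32: permission denied" while executing ./configure is the symptom you get when the filesystem holding ~/.rvm is mounted noexec, which some Arch setups use for /home. Assuming that is the culprit:

      # check the mount options for the filesystem holding ~/.rvm
      mount | grep ' /home '
      # if noexec shows up, remount with exec for a one-off test
      # (make it permanent in /etc/fstab if this fixes the build)
      sudo mount -o remount,exec /home
      rvm reinstall all --force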

    Read the article

  • Moving Windows XP from ICH10R RAID 5 to single disk using Linux [migrated]

    - by tudor
    A friend's machine running Windows XP refused to boot recently. It runs three SATA disks in RAID 5 (previously upgraded from RAID 1, not by me), and I have determined there to be a disk failure. The disks have been replaced many times in the past few years. I wish to back up the RAID 5 partition before I try anything to fix it. The RAID chipset used is ICH10R/DO. So, I plugged in an extra IDE drive and an Ubuntu USB key and looked at the RAID. The partitioning is a mess, but I did find at least one degraded but working RAID array with two partitions, one 79 GB and the other 86 GB. Then I:

    1) Partitioned my IDE disk using fdisk to have an 80 GB partition, bootable and marked as NTFS.
    2) dd'd the contents of the array to the partition.
    3) Disconnected everything else.
    4) Inserted a Windows XP CD and ran fixboot, fixmbr, and bootcfg. They all ran OK and claimed that they worked (e.g. bootcfg detects the Windows partition; fixboot returns saying it was written correctly).

    However, I'm still getting an error like "DISK FAILURE, BOOT DISK NOT FOUND". I have tried running the GRUB rescue disk, which also runs OK but won't boot into Windows; it just stops with a flashing cursor after chainloader +1, boot. One clue may be that the partitions appear to be wack: one disk has a 79 GB RAID partition at an offset on a 500 GB drive, and the second disk has a 320 GB RAID partition across the whole drive. Additionally, the BIOS lists the RAID size as 149 GB. I don't see how this works. How are they even assembling the array when the partitions are so different? I have also tried running the Windows XP automated repair tool, but that didn't work either. I'm presuming this is something simple. Perhaps Windows is attempting to boot into RAID and, upon not finding it, simply crashing? Perhaps the 79 GB partition's offset means that it's looking into the disk by that much? Please help!! To clarify: I want to make the single IDE disk bootable with a copy of the array so that I can prove/disprove that it's just that Windows has become corrupted, and use Windows tools to correct it before attempting the same thing on the RAID array. That way I have a working backup and can show the process I used to fix it.
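    For reference, a hedged sketch of the copy step (step 2 above), assuming the degraded fakeraid array appears via dmraid as a device under /dev/mapper and the IDE disk came up as /dev/sdd; both names are guesses, so confirm with lsblk or fdisk -l first:

      # identify the array and the target disk -- the names below are assumptions
      lsblk
      # copy the 79 GB Windows partition from the array to the IDE partition,
      # carrying on past read errors on the failing member
      sudo dd if=/dev/mapper/isw_raid_Volume0p1 of=/dev/sdd1 bs=4M conv=noerror,sync
      sync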

    Read the article

  • Single line diff

    - by Ollie Saunders
    Is there a diff tool I can run from the command line, or maybe just an online web page, that will compare single lines? GitHub diffs do this, but I don't want to publish things to GitHub just to read the diffs! This is the sort of thing I want to diff:

      a:2:{i:0;O:10:"FuTreeNode":6:{s:5:"outer";r:2;s:6:"inners";O:20:"FuLinkedTreeNodeList":2:{s:5:"outer";r:2;s:5:"items";a:0:{}}s:5:"value";s:3:"foo";s:4:"next";O:10:"FuTreeNode":6:{s:5:"outer";r:2;s:6:"inners";O:20:"FuLinkedTreeNodeList":2:{s:5:"outer";r:8;s:5:"items";a:0:{}}s:5:"value";s:3:"bar";s:4:"next";N;s:4:"prev";O:10:"FuTreeNode":6:{s:5:"outer";N;s:6:"inners";O:20:"FuLinkedTreeNodeList":2:{s:5:"outer";r:15;s:5:"items";a:0:{}}s:5:"value";s:3:"zim";s:4:"next";r:8;s:4:"prev";O:10:"FuTreeNode":6:{s:5:"outer";N;s:6:"inners";O:20:"FuLinkedTreeNodeList":2:{s:5:"outer";r:22;s:5:"items";a:0:{}}s:5:"value";s:3:"foo";s:4:"next";r:15;s:4:"prev";r:8;s:6:"ofList";O:12:"FuLinkedList":1:{s:5:"items";a:4:{i:0;r:2;i:1;r:8;i:2;r:22;i:3;r:15;}}}s:6:"ofList";r:30;}s:6:"ofList";O:20:"FuLinkedTreeNodeList":2:{s:5:"outer";r:2;s:5:"items";a:2:{i:0;r:2;i:1;r:8;}}}s:4:"prev";N;s:6:"ofList";r:37;}i:1;r:8;}

      a:2:{i:0;O:10:"FuTreeNode":6:{s:5:"outer";r:2;s:6:"inners";O:20:"FuLinkedTreeNodeList":2:{s:5:"outer";r:2;s:5:"items";a:0:{}}s:5:"value";s:3:"foo";s:4:"next";O:10:"FuTreeNode":6:{s:5:"outer";r:2;s:6:"inners";O:20:"FuLinkedTreeNodeList":2:{s:5:"outer";r:8;s:5:"items";a:0:{}}s:5:"value";s:3:"bar";s:4:"next";N;s:4:"prev";O:10:"FuTreeNode":6:{s:5:"outer";N;s:6:"inners";O:20:"FuLinkedTreeNodeList":2:{s:5:"outer";r:15;s:5:"items";a:0:{}}s:5:"value";s:3:"zim";s:4:"next";r:8;s:4:"prev";O:10:"FuTreeNode":6:{s:5:"outer";N;s:6:"inners";O:20:"FuLinkedTreeNodeList":2:{s:5:"outer";r:22;s:5:"items";a:0:{}}s:5:"value";s:3:"foo";s:4:"next";r:15;s:4:"prev";r:8;s:6:"ofList";O:12:"FuLinkedList":1:{s:5:"items";a:4:{i:0;r:2;i:1;r:8;i:2;r:22;i:3;r:15;}}}s:6:"ofList";r:30;}s:6:"ofList";O:20:"FuLinkedTreeNodeList":2:{s:5:"outer";r:2;s:5:"items";a:2:{i:0;r:2;i:1;r:8;}}}s:4:"prev";N;s:6:"ofList";r:37;}i:1;r:22;}

    I'm on Mac OS X.
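    Two command-line options that cope with long single lines, hedged on the strings being saved to files a.txt and b.txt first (both tools ship with, or are easily available on, Mac OS X):

      # word-level diff with no repository needed; --no-index diffs plain files
      git diff --no-index --word-diff=color a.txt b.txt

      # or character-level: explode each line into one character per line, then diff
      diff <(fold -w1 a.txt) <(fold -w1 b.txt)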

    Read the article

  • Merge cells into one

    - by alkitbi
    $query1 = "select * from linkat_link where emailuser='$email2' or linkname='$domain_name2' ORDER BY date desc LIMIT $From,$PageNO"; now sample show : <table border="1" width="100%"> <tr> <td>linkid</td> <td>catid</td> <td>linkdes</td> <td>price</td> </tr> <tr> <td>1</td> <td>1</td> <td>&nbsp;domain name</td> <td>100</td> </tr> <tr> <td>2</td> <td>1</td> <td>&nbsp;hosting&nbsp; plan one</td> <td>40</td> </tr> <tr> <td>3</td> <td>2</td> <td>&nbsp;domain name</td> <td>20</td> </tr> </table> How do I merge two or more  When there are numbers of cells same on the Table in this way sample? <table border="1" width="100%"> <tr> <td>catid</td> <td>linkdes</td> <td>price</td> </tr> <tr> <td>1</td> <td>linkid(1)- domain namelinkid(2)- hosting&nbsp; plan one</td> <td>10040</td> </tr> <tr> <td>2</td> <td>&nbsp;domain name</td> <td>20</td> </tr> </table>

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014, Microsoft has introduced a new database engine component called In-Memory OLTP, aka project "Hekaton", which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP helps us create memory-optimized tables, which in turn offer significant performance improvements for typical OLTP workloads. The main objective of memory-optimized tables is to ensure that highly transactional tables can live in memory and remain in memory forever without losing a single record. The most significant part is that the engine still supports the majority of Transact-SQL statements. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. The engine is designed for higher concurrency and minimal blocking: In-Memory OLTP alleviates the issue of locking by using a new type of multi-version optimistic concurrency control, and it substantially reduces waiting for log writes by generating far less log data and needing fewer log writes.

    Points to remember:

    • Memory-optimized tables refer to tables using the new data structures and keywords added as part of In-Memory OLTP.
    • Disk-based tables refer to the normal tables we have created in SQL Server since its inception. These tables use fixed-size 8 KB pages that need to be read from and written to disk as a unit.
    • Natively compiled stored procedures refer to a new object type, supported by the In-Memory OLTP engine, that is converted into machine code, which can further improve data-access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they can't be used to reference any disk-based table.
    • Interpreted Transact-SQL stored procedures are what SQL Server has always used.
    • Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables.
    • Interop refers to interpreted Transact-SQL that references memory-optimized tables.

    Using In-Memory OLTP

    The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; they are not available with 32-bit editions.

    Creating Databases

    Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA.
    Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables:

      CREATE DATABASE InMemoryDB
      ON PRIMARY (NAME = [InMemoryDB_data],
                  FILENAME = 'D:\data\InMemoryDB_data.mdf', SIZE = 500MB),
      FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA
          (NAME = [InMemoryDB_mod_dir],  FILENAME = 'S:\data\InMemoryDB_mod_dir'),
          (NAME = [InMemoryDB_mod_dir2], FILENAME = 'R:\data\InMemoryDB_mod_dir')
      LOG ON (NAME = [SampleDB_log],
              FILENAME = 'L:\log\InMemoryDB_log.ldf', SIZE = 500MB)
      COLLATE Latin1_General_100_BIN2;

    The example above creates files on three different drives (D:, S:, and R:) for the data files and in-memory storage, so if you would like to run this code, change the drive and folder locations to suit. Also notice that the binary collation specified is a Windows (non-SQL) one; a BIN2 collation is the only collation supported at this point for indexes on memory-optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the commands below to achieve the same:

      ALTER DATABASE AdventureWorks2012
          ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
      GO
      ALTER DATABASE AdventureWorks2012
          ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod')
          TO FILEGROUP hekaton_mod;
      GO

    Creating Tables

    There is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE example below.

    DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY)

    A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY = SCHEMA_ONLY will not persist its data to disk, which means data durability is compromised, whereas DURABILITY = SCHEMA_AND_DATA ensures that data is persisted along with the schema.

    Indexing Memory-Optimized Tables

    A memory-optimized table must always have an index if it is created with DURABILITY = SCHEMA_AND_DATA, and this can be achieved by declaring a PRIMARY KEY constraint at the time of creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified:

      CREATE TABLE Mem_Table
      (
          [Name] VARCHAR(32) NOT NULL
              PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
          [City] VARCHAR(32) NULL,
          [State_Province] VARCHAR(32) NULL,
          [LastModified] DATETIME NOT NULL
      )
      WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    As you can see in the query example above, we used the MEMORY_OPTIMIZED = ON clause to make sure the table is treated as a memory-optimized table and not just a normal table, and the DURABILITY = SCHEMA_AND_DATA clause, which means it will persist data along with the schema. You can also notice that this table has a PRIMARY KEY declared upfront, which is a mandatory clause for durable memory-optimized tables. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus on row and index storage in memory-optimized tables, so stay tuned for that as well. Now that we have covered the basics of memory-optimized tables and the key things to remember while using them, let's explore some examples to understand the performance gains from memory-optimized tables.
    I will be using the database I created earlier in this article, InMemoryDB, in the demo exercise below.

      USE InMemoryDB
      GO
      -- Creating a disk-based table
      CREATE TABLE dbo.Disktable
      (
          Id INT IDENTITY,
          Name CHAR(40)
      )
      GO
      CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
      GO
      -- Creating a memory-optimized table with a similar structure and DURABILITY = SCHEMA_AND_DATA
      CREATE TABLE dbo.Memorytable_durable
      (
          Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
          Name CHAR(40)
      )
      WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
      GO
      -- Creating another memory-optimized table with a similar structure but DURABILITY = SCHEMA_ONLY
      CREATE TABLE dbo.Memorytable_nondurable
      (
          Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
          Name CHAR(40)
      )
      WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
      GO
      -- Now insert 100000 records into dbo.Disktable and observe the time taken
      DECLARE @i_t BIGINT
      SET @i_t = 1
      WHILE @i_t <= 100000
      BEGIN
          INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR, @i_t))
          SET @i_t += 1
      END
      GO
      -- Do the same inserts for the memory table dbo.Memorytable_durable and observe the time taken
      DECLARE @i_t BIGINT
      SET @i_t = 1
      WHILE @i_t <= 100000
      BEGIN
          INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
          SET @i_t += 1
      END
      GO
      -- Finally, do the same inserts for the memory table dbo.Memorytable_nondurable and observe the time taken
      DECLARE @i_t BIGINT
      SET @i_t = 1
      WHILE @i_t <= 100000
      BEGIN
          INSERT INTO dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
          SET @i_t += 1
      END
      GO

    The above three inserts took 1.20 minutes, 54 secs, and 2 secs respectively to insert 100000 records on my machine with 8 GB RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and memory-optimized tables with durability SCHEMA_ONLY are even faster, as they do not bother persisting their data to disk, which makes them supremely fast. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now, I leave the decision on using memory-optimized tables to you. I hope you like this article and that it helped you understand the fundamentals of In-Memory OLTP. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Koenig

    Read the article

  • CSS code for the table

    - by Hulk
    Can someone please tell me how to make this table look better?

      <table>
        <tr><th>Name</th><th>Address</th><th>occupation</th></tr>
        <tr><td><textarea rows="10" cols="15"></textarea></td><td><textarea rows="10" cols="15"></textarea></td><td><textarea rows="10" cols="15"></textarea></td></tr>
        <tr><td><textarea rows="10" cols="15"></textarea></td><td><textarea rows="10" cols="15"></textarea></td><td><textarea rows="10" cols="15"></textarea></td></tr>
        <tr><td><textarea rows="10" cols="15"></textarea></td><td><textarea rows="10" cols="15"></textarea></td><td><textarea rows="10" cols="15"></textarea></td></tr>
      </table>

    This table is dynamically generated, meaning there could be more rows whose cells contain textareas. Can anyone please suggest CSS to beautify this table, or maybe a link? Thanks.
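    A minimal starting point, assuming the goal is simply borders, spacing, and a distinct header row (the colors are placeholders to adjust):

      table {
        border-collapse: collapse;
        width: 100%;
      }
      th, td {
        border: 1px solid #ccc;
        padding: 6px;
        vertical-align: top;
      }
      th {
        background: #eee;
        text-align: left;
      }
      td textarea {
        width: 100%;
        border: 1px solid #ddd;
      }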

    Read the article

  • SSIS Configuration error: Cannot retrieve configuration table schema

    - by Glenn M
    I'm trying to add a simple configuration to an SSIS package, of type SQL Server, so it is stored in a table. At the end of the wizard, when it tries to write a new row to the nominated table to store the configuration, it fails with the error:

      TITLE: Microsoft Visual Studio
      Could not complete wizard actions.
      Cannot retrieve configuration table schema.
      (Microsoft.DataTransformationServices.Wizards)

    I can't seem to resolve this. The configuration connection has full permissions on the table, and it sees it and can read from it, as it reports there is no current data for the filter I provide. It just won't write to it. A Google search for the error message above in quotes returns literally no hits! Any suggestions? Glenn
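    One hedged workaround while the wizard misbehaves: create the configuration table by hand with the layout the wizard normally generates (this is the standard [SSIS Configurations] schema; substitute whatever table name you nominated) and re-run the wizard pointing at it:

      CREATE TABLE [dbo].[SSIS Configurations]
      (
          ConfigurationFilter  NVARCHAR(255) NOT NULL,
          ConfiguredValue      NVARCHAR(255) NULL,
          PackagePath          NVARCHAR(255) NOT NULL,
          ConfiguredValueType  NVARCHAR(20)  NOT NULL
      );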

    Read the article

  • Wrong figures numbering - Package caption Error: Continued 'figure' after 'table'

    - by Eduardo
    Hello, I am having a problem with the numbering of figures using LaTeX. I am getting this error message:

      Package caption Error: Continued 'figure' after 'table'

    This is my code:

      \begin{table}
      \centering
      \subfloat[Tabla1\label{tab:Tabla1}]{
      \small
      \begin{tabular}{ | c | c | c | c | c |}
      \hline
      \multicolumn{5}{|c|}{\textbf{Tabla 1}} \\
      \hline
      ...
      ...
      \end{tabular}
      }
      \qquad
      \subfloat[Tabla2\label{tab:Tabla2}]{
      \small
      \begin{tabular}{ | c | c | c | c | c |}
      \hline
      \multicolumn{5}{|c|}{\textbf{Tabla 2}} \\
      \hline
      ...
      ...
      \end{tabular}
      }
      \caption{These are tables}
      \label{tab:Tables}
      \end{table}

      \begin{figure}
      \centering
      \subfloat[][Figure 1]{\label{fig:fig1}\includegraphics[width = 14cm]{fig1}}
      \qquad
      \subfloat[][Figure 2]{\label{fig:fig2}\includegraphics[width = 14cm]{fig2}}
      \end{figure}

      \begin{figure}[t]
      \ContinuedFloat
      \subfloat[][Figure 3]{\label{fig:fig3}\includegraphics[width = 14cm]{fig3}}
      \caption{Those are figures}
      \label{fig:Figures}
      \end{figure}
      \newpage

    What I want is this layout: Table, Table, Figure 1, Figure 2, Figure 3. Since Figure 1 and Figure 2 are too big to fit vertically, I want Figure 3 to be alone on another page; that's why I have the \ContinuedFloat. Visually it looks fine, but the problem is the numbering: the figures get the number 5.2, which is the same number as a figure I have earlier (the correct number should be 5.3). However, if I reference the figures \ref{fig:fig1}, \ref{fig:fig2} and \ref{fig:fig3}, I get 5.3a, 5.3b and 5.2c: the first two right, the last one wrong. I have been stuck with this for hours; any ideas? Thanks a lot in advance
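    A hedged guess at a fix: the first figure never issues a \caption, so the figure counter is not stepped and the caption package still considers the last captioned float to be the table. Moving the main caption into the first figure (and giving the continued part its own) may restore both the 5.3 numbering and the a/b/c subcaptions:

      \begin{figure}
      \centering
      \subfloat[][Figure 1]{\label{fig:fig1}\includegraphics[width = 14cm]{fig1}}
      \qquad
      \subfloat[][Figure 2]{\label{fig:fig2}\includegraphics[width = 14cm]{fig2}}
      \caption{Those are figures}% stepping the counter here is the assumption
      \label{fig:Figures}
      \end{figure}

      \begin{figure}[t]
      \ContinuedFloat
      \subfloat[][Figure 3]{\label{fig:fig3}\includegraphics[width = 14cm]{fig3}}
      \caption{Those are figures (continued)}
      \end{figure}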

    Read the article

  • [CakePHP] Cannot bake table model, controller and view

    - by user198003
    I developed a small CakePHP application, and now I want to add one more table (in fact, model/controller/view) to the system, named notes. I have already created the table, of course. But when I run the command cake bake model, I do not get the Notes table in the list. I can add it manually, but after that I get errors when running cake bake controller and cake bake view. Can you give me some clue as to why I have these problems, and how to add the new model?
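    A hedged checklist, assuming a CakePHP 1.x console: bake only lists tables that follow convention, i.e. a plural lowercase name (notes), an auto-increment id primary key, and the same database config as the one the console uses. If the table qualifies, naming the model explicitly also works:

      # run from app/; confirm the console sees the same database as the app
      ../cake/console/cake bake model
      # or bake the model by name instead of picking from the list
      ../cake/console/cake bake model Note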

    Read the article

  • [Linq to SQL] Multiple foreign keys to the same table

    - by cdonner
    I have a reference table with all sorts of controlled-value lookup data for gender, address type, contact type, etc. Many tables have multiple foreign keys to this reference table. I also have many-to-many association tables that have two foreign keys to the same table. Unfortunately, when these tables are pulled into a Linq model and the DBML is generated, SQLMetal does not look at the names of the foreign key columns, or the names of the constraints, but only at the target table. So I end up with members called Reference1, Reference2, ... not very maintenance-friendly. Example:

      <Association Name="tb_reference_tb_account" Member="tb_reference"    <======
                   ThisKey="shipping_preference_type_id" OtherKey="id"
                   Type="tb_reference" IsForeignKey="true" />
      <Association Name="tb_reference_tb_account1" Member="tb_reference1"  <======
                   ThisKey="status_type_id" OtherKey="id"
                   Type="tb_reference" IsForeignKey="true" />

    I can go into the DBML and manually change the member names, of course, but this would mean I can no longer round-trip my database schema. This is not an option at the current stage of the model, which is still evolving. Splitting the reference table into n individual tables is also not desirable. I can probably write a script that runs against the XML after each generation and replaces the member name with something derived from ThisKey (since I adhere to a naming convention for these types of keys). Has anybody found a better solution to this problem?
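    The post-processing script mentioned above can be quite small; a hedged LINQ to XML sketch, assuming keys follow the xxx_type_id convention so the member name can be derived by stripping the _id suffix (the file name and derivation rule are illustrative):

      using System.Linq;
      using System.Xml.Linq;

      class DbmlMemberRenamer
      {
          static void Main()
          {
              XNamespace ns = "http://schemas.microsoft.com/linqtosql/dbml/2007";
              var doc = XDocument.Load("Secret.dbml");
              var fkAssociations = doc.Descendants(ns + "Association")
                                      .Where(a => (string)a.Attribute("IsForeignKey") == "true");
              foreach (var assoc in fkAssociations)
              {
                  var thisKey = (string)assoc.Attribute("ThisKey");
                  // "shipping_preference_type_id" -> Member="shipping_preference_type"
                  if (thisKey != null && thisKey.EndsWith("_id"))
                      assoc.SetAttributeValue("Member", thisKey.Substring(0, thisKey.Length - 3));
              }
              doc.Save("Secret.dbml");
          }
      }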

    Read the article

  • linq2sql : get generic type of table

    - by benpage
    I think this is a simple question, but I've searched around and can't seem to find an answer easily. If you have:

      var list = new List<int>();
      // ... fill list ...

    and you want to get the generic type of list, I realise you could just type:

      var t = list.FirstOrDefault().GetType();

    Is there another way to do this via just the list, rather than referring to the enumeration? The reason is, I have a System.Data.Linq.Table<TABLE1> and what I want to do is get the type of TABLE1 from it. So:

      var table = new DataContext().TABLE1s;          // this is Table<TABLE1>
      var tableType = table.GetType().SomeMethod();   // I want tableType to equal TABLE1.GetType()
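    The reflection call being reached for here is GetGenericArguments(); a small sketch (Table<TABLE1> works the same way, since its one generic argument is the entity type):

      using System;
      using System.Collections.Generic;

      class Demo
      {
          static void Main()
          {
              var list = new List<int>();
              // element type straight from the type itself, no contents needed
              Type elementType = list.GetType().GetGenericArguments()[0];
              Console.WriteLine(elementType);   // prints System.Int32
          }
      }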

    Read the article

  • divs not displaying as a table

    - by CoffeeCode
    I made this CSS:

      DIV.TableContainer { display: table; background-color: Aqua; }
      DIV.TableRow { display: table-row; }
      DIV.TableCell { display: table-cell; }

    HTML page:

      <div class="TableContainer">
        <div class="TableRow">
          <div class="TableCell">
            <h4>Left Col</h4>
            <p>...</p>
          </div>
          <div class="TableCell">
            <h4>Right Col</h4>
            <p>...</p>
          </div>
        </div>
      </div>

    But it doesn't display as a table. Have I missed something?
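    One hedged thing to check before touching the CSS: display: table/table-row/table-cell is ignored by IE7 and earlier, and by any IE rendering in quirks mode, so a missing or old doctype produces exactly this symptom. Assuming that is the case here:

      <!DOCTYPE html>
      <!-- a strict doctype keeps IE8+ in standards mode, where the
           CSS table display values above are actually supported -->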

    Read the article

  • (fluent) nhibernate conditional table mapping strategy

    - by grenade
    I have no control over the database schema and have the following (simplified) table structure:

      CityProfile    (Id, Name)
      CountryProfile (Id, Name)
      RegionProfile  (Id, Name)

    I have a .NET enum and class encapsulating the lot:

      public enum Scope { Region, Country, City }

      public class Profile
      {
          public Scope Scope { get; set; }
          public int Id { get; set; }
          public string Name { get; set; }
      }

    I am looking for a mechanism that allows me to map to the correct table, something like:

      public class ProfileMap : ClassMap<Profile>
      {
          public ProfileMap()
          {
              switch (x => x.Scope)   // <-- Invalid code here!
              {
                  case Scope.City:    Table("CityProfile");    break;
                  case Scope.Country: Table("CountryProfile"); break;
                  case Scope.Region:  Table("RegionProfile");  break;
              }
              Id(x => x.Id);
              Map(x => x.Name);
          }
      }

    Or have I approached this wrong?
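    One conventional way to get there with Fluent NHibernate, hedged as a sketch: make Profile abstract, derive one class per table, and give each concrete type its own map; the Scope value can then be derived from the runtime type instead of being switched on:

      public abstract class Profile
      {
          public virtual int Id { get; set; }
          public virtual string Name { get; set; }
      }
      public class CityProfile : Profile { }
      public class CountryProfile : Profile { }
      public class RegionProfile : Profile { }

      // shared mapping logic; each subclass map just supplies its table name
      public abstract class ProfileMap<T> : ClassMap<T> where T : Profile
      {
          protected ProfileMap(string table)
          {
              Table(table);
              Id(x => x.Id);
              Map(x => x.Name);
          }
      }
      public class CityProfileMap : ProfileMap<CityProfile>
      {
          public CityProfileMap() : base("CityProfile") { }
      }
      public class CountryProfileMap : ProfileMap<CountryProfile>
      {
          public CountryProfileMap() : base("CountryProfile") { }
      }
      public class RegionProfileMap : ProfileMap<RegionProfile>
      {
          public RegionProfileMap() : base("RegionProfile") { }
      }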

    Read the article

  • How can parallelism affect the number of results?

    - by spender
    I have a fairly complex query that looks something like this:

      create table Items(SomeOtherTableID int, SomeField int);
      create table SomeOtherTable(Id int, GroupID int);

      with cte1 as
      (
        select SomeOtherTableID, COUNT(*) SubItemCount
        from Items t
        where t.SomeField is not null
        group by SomeOtherTableID
      ), cte2 as
      (
        select tc.SomeOtherTableID,
               ROW_NUMBER() over (partition by a.GroupID order by tc.SubItemCount desc) SubItemRank
        from Items t
        inner join SomeOtherTable a on a.Id = t.SomeOtherTableID
        inner join cte1 tc on tc.SomeOtherTableID = t.SomeOtherTableID
        where t.SomeField is not null
      ), cte3 as
      (
        select SomeOtherTableID from cte2 where SubItemRank = 1
      )
      select *
      from cte3 t1
      inner join cte3 t2 on t1.SomeOtherTableID < t2.SomeOtherTableID
      option (maxdop 1)

    The query is such that cte3 is filled with 6222 distinct results. In the final select, I am performing a cross join of cte3 with itself (so that I can compare every value in the table with every other value in the table at a later point). Notice the final line: option (maxdop 1). Apparently, this switches off parallelism. So, with 6222 result rows in cte3, I would expect (6222*6221)/2, or 19353531, results in the subsequent cross-joining select, and with the final maxdop line in place, that is indeed the case. However, when I remove the maxdop line, the number of results jumps to 19380454. I have 4 cores on my dev box. WTF? Can anyone explain why this is? Do I need to reconsider previous queries that cross join in this way?
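    A plausible explanation, hedged: ROW_NUMBER() here orders only by tc.SubItemCount desc, so ties within a GroupID are broken arbitrarily, and each reference to cte3 can be re-evaluated independently; a parallel plan is free to break those ties differently on each evaluation, so the two sides of the self-join can see slightly different "rank 1" sets. Two things to try: add a deterministic tiebreaker to the ORDER BY, and/or materialize the winners once so both sides of the join see exactly the same rows:

      with cte1 as (
        select SomeOtherTableID, COUNT(*) SubItemCount
        from Items t
        where t.SomeField is not null
        group by SomeOtherTableID
      ), cte2 as (
        select tc.SomeOtherTableID,
               ROW_NUMBER() over (partition by a.GroupID
                                  order by tc.SubItemCount desc,
                                           tc.SomeOtherTableID) SubItemRank  -- tiebreaker
        from Items t
        inner join SomeOtherTable a on a.Id = t.SomeOtherTableID
        inner join cte1 tc on tc.SomeOtherTableID = t.SomeOtherTableID
        where t.SomeField is not null
      )
      select SomeOtherTableID
      into #winners            -- evaluated exactly once
      from cte2
      where SubItemRank = 1;

      select *
      from #winners t1
      inner join #winners t2 on t1.SomeOtherTableID < t2.SomeOtherTableID;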

    Read the article

  • Create System.Data.Linq.Table in Code for Testing

    - by S. DePouw
    I have an adapter class for LINQ to SQL:

      public interface IAdapter : IDisposable
      {
          Table<Data.User> Users { get; }
      }

    Data.User is an object defined by LINQ to SQL pointing to the User table in persistence. The implementation for this is as follows:

      public class Adapter : IAdapter
      {
          private readonly SecretDataContext _context = new SecretDataContext();

          public void Dispose()
          {
              _context.Dispose();
          }

          public Table<Data.User> Users
          {
              get { return _context.Users; }
          }
      }

    This makes mocking the persistence layer easy in unit testing, as I can just return whatever collection of data I want for Users (Rhino.Mocks):

      Expect.Call(_adapter.Users).Return(users);

    The problem is that I cannot create the object 'users', since the constructors are not accessible and the class Table<T> is sealed. One option I tried is to just make IAdapter return IEnumerable<Data.User> or IQueryable<Data.User>, but the problem there is that I then do not have access to the methods ITable provides (e.g. InsertOnSubmit()). Is there a way I can create the fake Table<Data.User> in the unit test scenario so that I may be a happy TDD developer?
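    One hedged way out, since Table<T> itself can't be faked: program against a narrow interface that exposes only what the code actually uses (querying plus InsertOnSubmit), with the real Table<T> behind it in production and a List<T> behind it in tests:

      using System.Collections.Generic;
      using System.Linq;

      public interface ITableWrapper<T> where T : class
      {
          IQueryable<T> Query { get; }
          void InsertOnSubmit(T entity);
      }

      // production implementation delegating to the real Table<T>
      public class LinqTableWrapper<T> : ITableWrapper<T> where T : class
      {
          private readonly System.Data.Linq.Table<T> _table;
          public LinqTableWrapper(System.Data.Linq.Table<T> table) { _table = table; }
          public IQueryable<T> Query { get { return _table; } }
          public void InsertOnSubmit(T entity) { _table.InsertOnSubmit(entity); }
      }

      // in-memory fake for unit tests; no database, no mocking of Table<T> needed
      public class FakeTable<T> : ITableWrapper<T> where T : class
      {
          private readonly List<T> _rows = new List<T>();
          public IQueryable<T> Query { get { return _rows.AsQueryable(); } }
          public void InsertOnSubmit(T entity) { _rows.Add(entity); }
      }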

    Read the article

  • latex list environment inside the tabular environment: extra line at top preventing alignment

    - by Usagi
    Hello good people of stackoverflow. I have a LaTeX question that is bugging me. I have been trying to get a list environment to appear correctly inside the tabular environment. So far I have gotten everything to my liking except one thing: the top of the list does not align with other entries in the table; in fact it looks like it adds one line above the list... I would like to have these lists at the top. This is what I have, a custom list environment:

      \newenvironment{flushemize}{
        \begin{list}{$\bullet$}
          {\setlength{\itemsep}{1pt}
           \setlength{\parskip}{0pt}
           \setlength{\parsep}{0pt}
           \setlength{\partopsep}{0pt}
           \setlength{\topsep}{0pt}
           \setlength{\leftmargin}{12pt}}}{\end{list}}

    A renamed ragged right:

      \newcommand{\rr}{\raggedright}

    and here is my table:

      \begin{table}[H]\caption{Tank comparisons}\label{tab:tanks}
      \centering
      \rowcolors{2}{white}{tableShade}
      \begin{tabular}{p{1in}p{1.5in}p{1.5in}rr}
      \toprule
      {\bf Material} & {\bf Pros} & {\bf Cons} & {\bf Size} & {\bf Cost} \\
      \midrule
      \rr Reinforced concrete &\rr
        \begin{flushemize}\item Strong \item Secure \end{flushemize} &\rr
        \begin{flushemize}\item Prone to leaks \item Relatively expensive to install \item Heavy \end{flushemize}
        & 100,000 gal & \$299,400 \\
      \rr Steel &
        \begin{flushemize}\item Strong \item Secure \end{flushemize} &
        \begin{flushemize}\item Relatively expensive to install \item Heavy \item Require painting to prevent rusting \end{flushemize}
        & 100,000 gal & \$130,100 \\
      \rr Polypropylene &
        \begin{flushemize}\item Easy to install \item Mobile \item Inexpensive \item Prefabricated \end{flushemize} &
        \begin{flushemize}\item Relatively insecure \item Max size available 10,000 gal \end{flushemize}
        & 10,000 gal & \$5,000 \\
      \rr Wood &
        \begin{flushemize}\item Easy to install \item Mobile \item Cheap to install \end{flushemize} &
        \begin{flushemize}\item Prone to rot \item Must remain full once constructed \end{flushemize}
        & 100,000 gal & \$86,300 \\
      \bottomrule
      \end{tabular}
      \end{table}

    Thank you for any advice :)

    Read the article

  • MS SQL Bridge Table Constraints

    - by greg
    Greetings - I have a table of Articles and a table of Categories. An Article can be used in many Categories, so I have created a table of ArticleCategories like this:

      BridgeID int (PK)
      ArticleID int
      CategoryID int

    Now, I want to create constraints/relationships such that the ArticleID-CategoryID combinations are unique AND that the IDs must exist in the respective primary key tables (Articles and Categories). I have tried using both the VS2008 Server Explorer and Enterprise Manager (SQL 2005) to create the FK relationships, but the results always prevent duplicate ArticleIDs in the bridge table, even though the CategoryID is different. I am pretty sure I am doing something obviously wrong, but I appear to have a mental block at this point. Can anyone please tell me how this should be done? Greatly appreciated!
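    A hedged sketch of the usual shape (the referenced column names in Articles and Categories are assumptions): declare the pair itself as the primary key, or keep BridgeID as a surrogate key and put a separate UNIQUE constraint on the pair. Either way, the uniqueness is declared over both columns together, which is what the single-column attempts were missing:

      CREATE TABLE dbo.ArticleCategories
      (
          ArticleID  int NOT NULL
              CONSTRAINT FK_ArticleCategories_Articles REFERENCES dbo.Articles (ArticleID),
          CategoryID int NOT NULL
              CONSTRAINT FK_ArticleCategories_Categories REFERENCES dbo.Categories (CategoryID),
          CONSTRAINT PK_ArticleCategories PRIMARY KEY (ArticleID, CategoryID)
      );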

    Read the article

  • SQL: convert tokens in a string or elements of an array into rows of a table

    - by slowpoison
    Is there a simple way in SQL to convert a string or an array to rows of a table? For example, let's say the string is 'a,b,c,d,e,f,g'. I'd prefer an SQL statement that takes that string, splits it at commas, and inserts the resulting strings into a table. In PostgreSQL I can use regexp_split_to_array() to split the string into an array. So, if you know a way to insert an array's elements as rows into a table, that would work too.
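    In PostgreSQL this is one statement either way; a hedged sketch assuming a target table items(val text) (regexp_split_to_table has been available since 8.3, unnest since 8.4):

      -- split straight to rows
      INSERT INTO items (val)
      SELECT regexp_split_to_table('a,b,c,d,e,f,g', ',');

      -- or go via the array, as in the question
      INSERT INTO items (val)
      SELECT unnest(regexp_split_to_array('a,b,c,d,e,f,g', ','));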

    Read the article

  • Change table row display property

    - by Idsa
    I have an HTML page with a table that contains a hidden row:

      <table>
        <tr id="hiddenTr" style="display:none">
        </tr>
      </table>

    I need to make it visible on the client side using jQuery. I tried this:

      $('#hiddenTr').show();

    and this:

      $('#hiddenTr').css('display', 'table-row');

    Neither implementation works for me. Furthermore, the second one is not cross-browser.
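    A commonly used workaround, hedged: clear the inline style instead of choosing a display value, so each browser falls back to its own default display for <tr>:

      $('#hiddenTr').css('display', '');

    Setting the property to an empty string just removes the inline display:none, which sidesteps the block-versus-table-row disagreement between older IE and standards browsers.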

    Read the article

  • Limiting choices from an intermediary ManyToMany junction table in Django

    - by Matthew Rankin
    Background: I've created three Django models, Inventory, SalesOrder, and Invoice, to model items in inventory, sales orders for those items, and invoices for a particular sales order. Each sales order can have multiple items, so I've used an intermediary junction table, SalesOrderItem, via the through argument of the ManyToManyField. Also, partial billing of a sales order is allowed, so I've created a ForeignKey in the Invoice model related to the SalesOrder model, so that a particular sales order can have multiple invoices. Here's where I deviate from what I've normally seen: instead of relating the Invoice model to the Item model via a ManyToManyField, I've related the Invoice model to the SalesOrderItem intermediary junction table through the intermediary junction table InvoiceItem. I've done this because it better models reality: our invoices are tied to sales orders and can only include items that are tied to that sales order, as opposed to any item in inventory. I will admit that it does seem strange having the intermediary junction table of a ManyToManyField related to the intermediary junction table of another ManyToManyField.

    Question: How can I limit the choices available for the invoice_items in the Invoice model to just the sales_order_items of the SalesOrder model for that particular Invoice? I tried using limit_choices_to={'sales_order': self.invoice.sales_order} as part of item = models.ForeignKey(SalesOrderItem) in the InvoiceItem model, but that didn't work. Am I correct in thinking that limiting the choices for the invoice_items should be handled in the model instead of in a form?

    Code:

      class Item(models.Model):
          item_num = models.SlugField(unique=True)
          default_price = models.DecimalField(max_digits=10, decimal_places=2, blank=True, null=True)

      class SalesOrderItem(models.Model):
          item = models.ForeignKey(Item)
          sales_order = models.ForeignKey('SalesOrder')
          unit_price = models.DecimalField(max_digits=10, decimal_places=2)
          quantity = models.DecimalField(max_digits=10, decimal_places=4)

      class SalesOrder(models.Model):
          customer = models.ForeignKey(Party)
          so_num = models.SlugField(max_length=40, unique=True)
          sales_order_items = models.ManyToManyField(Item, through=SalesOrderItem)

      class InvoiceItem(models.Model):
          item = models.ForeignKey(SalesOrderItem)
          invoice = models.ForeignKey('Invoice')
          unit_price = models.DecimalField(max_digits=10, decimal_places=2)
          quantity = models.DecimalField(max_digits=10, decimal_places=4)

      class Invoice(models.Model):
          invoice_num = models.SlugField(max_length=25)
          sales_order = models.ForeignKey(SalesOrder)
          invoice_items = models.ManyToManyField(SalesOrderItem, through='InvoiceItem')
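    Since limit_choices_to is evaluated at class-definition time (there is no self to reach through), this narrowing is usually done in a form rather than the model; a hedged sketch, assuming the form is bound to an InvoiceItem whose invoice is already set:

      from django import forms

      class InvoiceItemForm(forms.ModelForm):
          class Meta:
              model = InvoiceItem

          def __init__(self, *args, **kwargs):
              super(InvoiceItemForm, self).__init__(*args, **kwargs)
              if self.instance.invoice_id:
                  # narrow the item choices to the invoice's own sales order
                  self.fields['item'].queryset = SalesOrderItem.objects.filter(
                      sales_order=self.instance.invoice.sales_order)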

    Read the article

  • Accessing SQL Server table is slow - very little data inside

    - by Joseph
    Dear all, I have a temp table; data keeps coming in and going out. These days, even when there are very few records in it, a select takes very long. I can't put an index on this table because it's a temp table. The only fix I have found is to drop the table and recreate it, which works very well. Any idea why this is happening? Is it some kind of fragmentation? If there were an index we could check its fragmentation, but with no index, what can we do? We are using SQL Server 2008 64-bit. Thanks, Joseph
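    Worth noting, hedged only on the table really being a #temp table: SQL Server does allow indexes on temp tables, so that avenue is not actually closed:

      CREATE TABLE #staging (id int, payload varchar(100));
      CREATE NONCLUSTERED INDEX IX_staging_id ON #staging (id);

    If rows churn constantly, stale statistics on the table are another plausible cause of slow selects; running UPDATE STATISTICS on the table (or adding OPTION (RECOMPILE) to the query) is a cheaper test than dropping and recreating it.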

    Read the article
