Search Results

Search found 25113 results on 1005 pages for 'grouped table'.

Page 59/1005 | < Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >

  • testdisk - recover partition table

    - by Evaggelos Balaskas
    I destroyed the partition table of my laptop. Testdisk reports the following:

        Disk laptop.img - 250 GB / 232 GiB - CHS 30402 255 63 (RO)
             Partition                Start        End    Size in sectors
        >P MS Data                   435868     456606       20739 [NO NAME]
         P MS Data                 19232600   19235479        2880 [NO NAME]
         D MS Data                 41945087   83890143    41945057
         D MS Data                 57151486  168579069   111427584
         D MS Data                 67637246  141037565    73400320
         D MS Data                151523326  193466365    41943040
         D MS Data                170617328  170618223         896
         D MS Data                170631168  170634047        2880
         D MS Data                171338232  171344405        6174 [Boot]
         D MS Data                172008235  172231918      223684 [NO NAME]
         P MS Data                193466368  214437887    20971520
         D MS Data                217321375  225321678     8000304 [root]
         D MS Data                224923646  308809725    83886080 [media]
         D MS Data                308809728  420237311   111427584
         D MS Data                418910206  481824765    62914560 [vmimages]

    My partition table had 3 primary partitions: 1. WinXP Home, 2. /boot, 3. LVM. Inside LVM I had 9 or 10 LVM partitions, one of which was my home (encrypted with LUKS). Testdisk can't recover my partition table or any other partition, and the partitions marked [P] don't contain any useful data. I want to use dd to extract the partitions and try to recover as many files as I can. Any ideas how I can extract e.g. the [root] LVM partition from the testdisk report above? I am afraid the disk itself may also be corrupted.
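
    One way to carve a single partition out of the image is dd, using the start sector and sector count from the report above. A minimal sketch for the [root] entry, assuming 512-byte sectors and that laptop.img is the full-disk image (file names are illustrative):

        # [root] starts at sector 217321375 and is 8000304 sectors long
        dd if=laptop.img of=root.img bs=512 skip=217321375 count=8000304 conv=noerror,sync

        # sanity-check what was carved before pointing cryptsetup/fsck/photorec at it
        file root.img

    The same pattern works for any other row in the report; only skip and count change.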

    Read the article

  • Spring-mvc project can't select from a particular mysql table

    - by Dan Ray
    I'm building a Spring-mvc project (using JPA and Hibernate for DB access) that is running just great locally, on my dev box, with a local MySQL database. Now I'm trying to put a snapshot up on a staging server for my client to play with, and I'm having trouble. Tomcat (after some wrestling) deploys my war file without complaint, and I can get some response from the application over the browser. When I hit my main page, which is behind Spring Security authentication, it redirects me to the login page, which works perfectly. I have Security configured to query the database for user details, and that works fine. In fact, a change to a password in the database is reflected in the behavior of the login form, so I'm confident it IS reaching the database and querying the user table. Once authenticated, we go to the first "real" page of the app, and I get a "data access failure" error. The server's console log gets this line (redacted):

        ERROR org.hibernate.util.JDBCExceptionReporter - SELECT command denied to user 'myDbUser'@'localhost' for table 'asset'

    However, if I go to MySQL from the shell using exactly the same creds, I have no problem at all selecting from the asset table:

        [development@tomcat01stg]$ mysql -u myDbUser -pmyDbPwd dbName
        ...
        mysql> \s
        --------------
        mysql  Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (i686) using readline 5.1
        Connection id:          199
        Current database:       dbName
        Current user:           myDbUser@localhost
        ...
        UNIX socket:            /var/lib/mysql/mysql.sock
        --------------
        mysql> select count(*) from asset;
        +----------+
        | count(*) |
        +----------+
        |       19 |
        +----------+
        1 row in set (0.00 sec)

    I've broken down my MySQL access settings, cleaned out the user and re-run the grant commands, set up a version of the user from 'localhost' and another from '%', making sure to flush permissions.... Nothing is changing the behavior of this thing. What gives?
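
    A few checks that often narrow this kind of thing down (a sketch, not a definitive fix): the shell session above authenticates over the UNIX socket, while a JDBC URL pointing at localhost typically connects over TCP, so a different user/host row (or an anonymous-user row) can end up matching. Comparing the account rows and what the application's connection actually authenticates as usually shows the mismatch:

        -- which account rows exist for this user?
        SELECT user, host FROM mysql.user WHERE user = 'myDbUser';

        -- what was actually granted to each of them?
        SHOW GRANTS FOR 'myDbUser'@'localhost';
        SHOW GRANTS FOR 'myDbUser'@'%';

        -- run from the application's own connection if possible:
        -- USER() is what the client sent, CURRENT_USER() is the account row that matched
        SELECT USER(), CURRENT_USER();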

    Read the article

  • Accidentally dd'ed an image to wrong drive / overwrote partition table + NTFS partition start

    - by Kento Locatelli
    I screwed up and set the wrong output for dd when trying to copy a FreeNAS iso, overwriting the wrong external hard drive. Ironically, I was trying to set up a FreeNAS server for data backup...

    - External drive is only used for data storage; the system is entirely intact
    - Drive had a single NTFS partition filling the entire device (2TB WD Elements)
    - Drive originally had an MBR partition table; it now shows as having a GPT, presumably from the FreeNAS image
    - Drive was mounted at the time, with maybe a couple kB of data written/read after running dd
    - Drive is just a few months old and healthy (regular SMART / fs checks)
    - I have not rebooted the OS (CrunchBang)
    - /proc/partitions still holds the correct information (and has been stored)
    - I have dd's output (records in / out / bytes)
    - TestDisk did not find any partitions on quick or deep search
    - Running photorec to recover the more important data (a couple of recent plaintext files that hadn't been backed up yet); the vast majority of disk content (>80%) is unnecessary media files

    My current plan is to let photorec do its thing, then recreate the MBR with gparted and use cfdisk to create another NTFS partition using the sector information from /sys/block/.../. Is that a good course of action (that is, does it have a chance of success)? Or is there anything else I should try first? Possibly relevant information:

        dd if=FreeNAS-8.0.4-RELEASE-p3-x86.iso of=/dev/sdc:
        194568+0 records in
        194568+0 records out
        99618816 bytes (100 MB) copied

        grep . /sys/block/sdc/sdc*/{start,size}:
        /sys/block/sdc/sdc1/start:2048
        /sys/block/sdc/sdc1/size:3907022848

        cat /proc/partitions:
        major minor  #blocks  name
        ** Snipped **
        8       32  1953512448 sdc
        8       33  1953511424 sdc1

        current fdisk -l output:
        WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
        Disk /dev/sdc: 2000.4 GB, 2000396746752 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/sdc doesn't contain a valid partition table
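
    For the "recreate the MBR" step, one possible route is an sfdisk one-liner built from the surviving /sys information. This is only a sketch under the assumption that the old partition really did start at sector 2048 with the size shown above; note also that the first ~100 MB of the old NTFS partition (including its boot sector) were overwritten by the image, so recreating the table alone may not make it mountable — TestDisk can often rebuild the NTFS boot sector from the backup copy at the end of the partition. If tools keep seeing the stale GPT afterwards, the GPT headers at the start and end of the disk may need clearing first.

        # back up what is on the disk now, just in case
        dd if=/dev/sdc of=sdc_first_mb.bin bs=512 count=2048

        # recreate a single NTFS (type 7) partition: start 2048, size 3907022848 sectors
        # old-style sfdisk (util-linux <= 2.21) needs -uS for sector units:
        echo '2048,3907022848,7' | sfdisk -uS /dev/sdc
        # newer sfdisk versions use named fields instead:
        # echo 'start=2048, size=3907022848, type=7' | sfdisk /dev/sdc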

    Read the article

  • Cumulative average using data from multiple rows in an excel table

    - by Aaron E
    I am trying to add a cumulative average column to a table I'm building in Excel. I use the totals row for the final cumulative average, but I would also like a column that gives the cumulative average for each row up to that point. So, if I have 3 rows, each row should show the average of all rows up to and including itself, with the final cumulative average in the totals row. Right now I can't figure this out, because the formula would have to reference rows above the current row, and I'm unsure how to go about that in a table rather than in plain cells. If it were just cells, I would know how to write the formula and copy it down each row, but the formula needs to keep working whenever a new row is added to the table. What I keep arriving at is something like (Completion rate row 1)/n, where n is the number of rows up to that point (here n=1), then (Completion rate row 1 + Completion rate row 2)/n for row 2 with n=2, and so on for each new row added. Please advise.
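
    One pattern that keeps working as table rows are added is to anchor the start of the range and let the reference to the current row expand it. A sketch, assuming the column is named "Completion Rate" and, in the plain-cell version, lives in column B starting at row 2 (adjust names and cells to your sheet):

        Plain-cell version, entered next to row 2 and filled down:
        =AVERAGE($B$2:B2)

        Structured-reference (table) version, which survives row insertions:
        =AVERAGE(INDEX([Completion Rate],1):[@[Completion Rate]])

    The INDEX call pins the range to the first data row of the column, while [@[Completion Rate]] supplies the cell in the current row, so each row averages everything from the top of the table down to itself.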

    Read the article

  • What is the fastest way to clone an INNODB table within the same server?

    - by Vic
    Our development server is a replication slave of our production server. We have a script that developers use if they want to run their applications/bug fixes against fresh data. That script looks like this:

        dbs=( analytics auth logs users )
        server=localhost
        conn="-h ${server} -u ${username} --password=${password}"

        # Stop the replication client so we don't encounter weird data.
        echo "STOP SLAVE" | mysql ${conn}

        # Bunch of bulk insert optimizations
        echo "SET autocommit=0" | mysql ${conn}
        echo "SET unique_checks=0" | mysql ${conn}
        echo "SET foreign_key_checks=0" | mysql ${conn}

        # Restore all databases and tables.
        for sourcedb in ${dbs[*]}
        do
            destdb=${prefix}${sourcedb}
            echo "Dropping database ${destdb}..."
            echo "DROP DATABASE IF EXISTS ${destdb}" | mysql ${conn}
            echo "CREATE DATABASE ${destdb}" | mysql ${conn}

            # First, all the tables.
            for table in `echo "SHOW FULL TABLES WHERE Table_type <> 'VIEW'" | mysql $conn $sourcedb | tail -n +2`; do
                if [[ "${table}" != 'BASE' && "${table}" != 'TABLE' && "${table}" != 'VIEW' ]] ; then
                    createTable=`echo "SHOW CREATE TABLE ${table}"|mysql -B -r $conn $sourcedb|tail -n +2|cut -f 2-`
                    echo "Restoring ${destdb}/${table}..."
                    echo "$createTable ;" | mysql $conn $destdb
                    insertData="INSERT INTO ${destdb}.${table} SELECT * FROM ${sourcedb}.${table}"
                    echo "$insertData" | mysql $conn $destdb
                fi
            done
        done

        echo "SET foreign_key_checks=1" | mysql ${conn}
        echo "SET unique_checks=1" | mysql ${conn}
        echo "COMMIT" | mysql ${conn}

        # Restart the replication client
        echo "START SLAVE" | mysql ${conn}

    All of these operations are, as I mentioned, within the same server. Is there a faster way to clone the tables that I'm not seeing? They're all InnoDB tables. Thanks!
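
    One path that can be substantially faster than INSERT ... SELECT for large InnoDB tables is transportable tablespaces, which copy the .ibd file instead of replaying every row. This is only a sketch under the assumptions that the server is MySQL 5.6 or later and runs with innodb_file_per_table; table and database names are illustrative:

        -- create an empty copy with the same definition, then detach its tablespace
        CREATE TABLE destdb.t LIKE sourcedb.t;
        ALTER TABLE destdb.t DISCARD TABLESPACE;

        -- quiesce the source table and export its tablespace metadata
        FLUSH TABLES sourcedb.t FOR EXPORT;
        -- at the OS level, copy sourcedb/t.ibd and sourcedb/t.cfg into the destdb directory
        UNLOCK TABLES;

        -- attach the copied file to the destination table
        ALTER TABLE destdb.t IMPORT TABLESPACE;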

    Read the article

  • Error code 1005 (errno: 121) upon create table while restoring MySQL database from a dump

    - by Jonathan
    I have a Linux prod machine and a Win7 64-bit dev machine. My workflow includes dumping the production MySQL database on the Linux machine and restoring it into my local MySQL database on the Windows machine (using SQLyog). This worked fine for a long time. Following some trouble, I formatted and reinstalled my Windows dev machine. Since then I'm unable to restore the db on it. I keep receiving the following error:

        Query: CREATE TABLE `auth_group` (
          `id` int(11) NOT NULL auto_increment,
          `name` varchar(80) collate utf8_unicode_ci NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `name` (`name`)
        ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

        Error occured at: 2010-06-26 17:16:14
        Line no.: 30
        Error Code: 1005 - Can't create table 'ap_site.auth_group' (errno: 121)

    Notice that this is the first CREATE TABLE statement in the SQL dump file. This error occurs both on MySQL Community Server 5.1.41 and 5.1.48, and with SQLyog Community 8.0.4 and 8.5.1. I really don't know what's different in my configuration between before the reinstall and now, or why it has this effect. Restoring from an SQL dump is something I need to keep doing, so I need a permanent fix and not a tailored workaround.
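
    Error 1005 with errno 121 from InnoDB usually points at a name collision in InnoDB's internal data dictionary — for example a leftover foreign key constraint from an earlier, partially dropped copy of the database. A couple of hedged diagnostics that often name the culprit:

        # from the command line of the MySQL installation, translate the errno
        perror 121

        -- from a MySQL client; the LATEST FOREIGN KEY ERROR section usually
        -- identifies the constraint or table InnoDB is complaining about
        SHOW ENGINE INNODB STATUS\G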

    Read the article

  • Fixing damaged partition table

    - by dr4cul4
    This is a continuation of Recover Extended Partition, but this time I have a different problem, related to the partition table itself. I managed to restore the partition that I needed and backed up the files that were crucial to me (at least those I had space to store somewhere). OK, now to the problem. My partition table is corrupted; booting RIP Linux I can mount the recovered partition in truecrypt (and the other ones that were recovered), but that's basically it. When I launch GParted I have an unallocated drive.

    GParted device information:

        Model: ATA ST2000DL003-9VT1
        Size: 1.82TiB
        Path: /dev/sda
        Partition table: unrecognized
        Heads: 255
        Sectors/track: 63
        Cylinders: 243201
        Total Sectors: 3907029168
        Sector size: 512

    When I check information on the unallocated space I get:

        File system: unallocated
        Size: 1.82TiB
        First sector: 0
        Last sector: 3907029167
        Total sectors: 3907029168
        Warning: Can't have a partition outside the disk!

    Now the output of testdisk (Analyze):

        TestDisk 6.13, Data Recovery Utility, November 2011
        Christophe GRENIER <[email protected]>
        http://www.cgsecurity.org

        Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        Current partition structure:
             Partition                  Start        End    Size in sectors
        > 1 P Linux                 13132 242 39 16353 233  8   51744768
          2 E extended LBA          16807 223  1 243201 254 63 3637021626
        No partition is bootable
          5 L Linux                 16807 223 57 20430  39 25   58191872
          X extended                20430  70  1 243201  78 13 3578816632
        Invalid NTFS or EXFAT boot
          6 L HPFS - NTFS           20430  71 58 243201  78 13 3578816512
          6 LNext

    Now fdisk:

        # fdisk -l /dev/sda

        Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00039cd0

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1       210980864   262725631    25872384   83  Linux
        /dev/sda2       270018504  3907040129  1818510813    f  W95 Ext'd (LBA)
        /dev/sda5       270018560   328210431    29095936   83  Linux
        /dev/sda6       328212480  3907028991  1789408256    7  HPFS/NTFS/exFAT

    Now I would like to fix this so the partitions are arranged correctly, but I have no idea which tool is capable of doing it (I tried a few; some offered a fix, but it was too risky at the moment - I am still backing up data).
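
    Before letting any tool rewrite the table, it is worth snapshotting what is currently on the disk so every experiment stays reversible. A small sketch (device name as in the output above; file names are illustrative):

        # dump the current (broken) partition table so it can be restored later with
        # "sfdisk /dev/sda < sda_partition_table_backup.txt"
        sfdisk -d /dev/sda > sda_partition_table_backup.txt

        # also keep a raw copy of the first couple of MB (MBR and the EBR chain live here)
        dd if=/dev/sda of=sda_first_mb.bin bs=512 count=4096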

    Read the article

  • Problems creating a functioning table

    - by Hoser
    This is a pretty simple SQL query, I would assume, but I'm having problems getting it to work.

        if (object_id('#InfoTable') is not null)
        Begin
            Drop Table #InfoTable
        End

        create table #InfoTable
        (NameOfObject varchar(50),
         NameOfCounter varchar(50),
         SampledValue float(30),
         DayStamp datetime)

        insert into #InfoTable(NameOfObject, NameOfCounter, SampledValue, DayStamp)
        select
            vPerformanceRule.ObjectName AS NameOfObject,
            vPerformanceRule.CounterName AS NameOfCounter,
            Perf.vPerfRaw.SampleValue AS SampledValue,
            Perf.vPerfHourly.DateTime AS DayStamp
        from vPerformanceRule, vPerformanceRuleInstance, Perf.vPerfHourly, Perf.vPerfRaw
        where (ObjectName like 'Logical Disk' and CounterName like '% Free Space' AND SampleValue > 95 AND SampleValue < 100)
        order by DayStamp desc

        select NameOfObject, NameOfCounter, SampledValue, DayStamp
        from #InfoTable

        Drop Table #InfoTable

    I've tried various other forms of syntax, but no matter what I do, I get these error messages:

        Msg 207, Level 16, State 1, Line 10  Invalid column name 'NameOfObject'.
        Msg 207, Level 16, State 1, Line 10  Invalid column name 'NameOfCounter'.
        Msg 207, Level 16, State 1, Line 10  Invalid column name 'SampledValue'.
        Msg 207, Level 16, State 1, Line 10  Invalid column name 'DayStamp'.
        Msg 207, Level 16, State 1, Line 22  Invalid column name 'NameOfObject'.
        Msg 207, Level 16, State 1, Line 22  Invalid column name 'NameOfCounter'.
        Msg 207, Level 16, State 1, Line 22  Invalid column name 'SampledValue'.
        Msg 207, Level 16, State 1, Line 22  Invalid column name 'DayStamp'.

    Line 10 is the first 'insert into' line, and line 22 is the second select line. Any ideas?
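
    One common cause of exactly this pattern (hedged, since it depends on session state): OBJECT_ID('#InfoTable') resolves names in the current database, but temp tables live in tempdb, so the drop never runs; if a #InfoTable with different columns survives from an earlier attempt in the same session, the batch compiles against that old schema and fails with "Invalid column name". A sketch of the usual idiom:

        IF OBJECT_ID('tempdb..#InfoTable') IS NOT NULL
            DROP TABLE #InfoTable;

        CREATE TABLE #InfoTable
        (
            NameOfObject  VARCHAR(50),
            NameOfCounter VARCHAR(50),
            SampledValue  FLOAT,
            DayStamp      DATETIME
        );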

    Read the article

  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies. 1. Cross-schema dependencies Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1: SOURCE   TARGET CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(Col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY); We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of: Creating a table with the new schema Inserting data from the old table to the new table, with appropriate conversion functions (in this case, TO_NUMBER) Dropping the old table Rename new table to same name as old table Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of the object population prefiltering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is: Drop foreign key constraint on SchemaA.Table1 Rebuild SchemaB.Table1 Rebuild SchemaA.Table1, adding the foreign key constraint to the new table This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as any table in SchemaA. Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we’re populating, or point to objects in the schemas we’re populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies eg a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn’t enough. 2. Dependency chains The solution above will only get the immediate dependencies of objects in populated schemas. What if there’s a chain of dependencies? A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1 If we’re only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal on the dependency graph that all_dependencies represents. 
Fortunately, we don’t have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL – CONNECT BY. So, we can put all the dependencies we want to include together in big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we’re populating. We should end up with all the objects that might be affected by modifications in the initial schema we’re populating. Good solution? Well, no. For one thing, it’s sloooooow. all_dependencies, on my test databases, has got over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance. 3. Comparison dependencies Consider the following schema: SOURCE   TARGET CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100)); What will happen if we used the dependency algorithm above on the source & target database? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1, so that will be included in the source database population. On the target, SchemaA.Table1 has no such reference. Therefore SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two objects models, what you will end up with is: SOURCE  TARGET SchemaA.Table1 -> SchemaA.Table1 SchemaB.Table1 -> (no object exists) When this comparison is synchronized, we will see that SchemaB.Table1 does not exist, so we will try the following sequence of actions: Create SchemaB.Table1 Rebuild SchemaA.Table1, with foreign key to SchemaB.Table1 Oops. Because the dependencies are only followed within a single database, we’ve tried to create an object that already exists. To fix this we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won’t try and create objects that already exist. All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1): SOURCE   TARGET CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY); CREATE TABLE SchemaC.Table1 ( Col1 NUMBER);   CREATE TABLE SchemaC.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1); Although we’re now including SchemaB.Table1 on both sides of the comparison, there’s a third table (SchemaC.Table1) that we don’t know about that will cause the rebuild of SchemaB.Table1 to fail if we try and synchronize SchemaA.Table1. That’s because we’re only running the dependency query on the schemas we’re explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database. 
Furthermore, this dependency chain could be arbitrarily extended.This leads us to the following algorithm for finding all the dependencies of a comparison: Find initial dependencies of schemas the user has selected to compare on the source and target Include these objects in both the source and target object populations Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa Repeat 2 & 3 until no more objects are found For the schema above, this will result in the following sequence of actions: Find initial dependenciesSchemaA.Table1 -> SchemaB.Table1 found on sourceNo objects found on target Include objects in both source and targetSchemaB.Table1 included in source and target Run dependency query, starting with found objectsNo objects to start with on sourceSchemaB.Table1 -> SchemaC.Table1 found on target Include objects in both source and targetSchemaC.Table1 included in source and target Run dependency query on found objectsNo objects found in sourceNo objects to start with in target Stop This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server); and to ensure we don’t read the entire dependency graph onto the client we also pull the graph across in bits – we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don’t yet know about a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed. 4. Object blacklists and fast dependencies When we tested this solution, we were puzzled in that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (eg the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in that schema. To solve this we introduced blacklists of objects we wouldn’t follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY, ORDSYS.SI_COLOR, among others) we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory. Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that there were some columns, with dependency information we required, that were querying system tables with no indexes on them! 
To cut a long story short, running the following query: SELECT * FROM all_tab_cols WHERE data_type_owner = ‘XDB’; results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the ‘Ignore slow dependencies’ option was born – not querying this and a couple of similar clauses to drastically speed up the dependency query execution time, at the expense of producing incorrect sync scripts in rare edge cases. Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!
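
    For readers who want to experiment with the idea, a rough illustration of the kind of dependency walk the article describes — a simplified sketch against ALL_DEPENDENCIES, not the actual query used by Schema Compare for Oracle:

        SELECT LEVEL,
               owner, name, type,
               referenced_owner, referenced_name, referenced_type
        FROM   all_dependencies
        START WITH owner = 'SCHEMAA'
        CONNECT BY NOCYCLE PRIOR referenced_owner = owner
                       AND PRIOR referenced_name  = name
                       AND PRIOR referenced_type  = type;

    Even on a modest schema this makes the performance problem described above easy to reproduce, which is why the shipping implementation pulls the edges across incrementally and walks the graph on the client instead.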

    Read the article

  • merge cells in one

    - by alkitbi
    $query1 = "select * from linkat_link where emailuser='$email2' or linkname='$domain_name2' ORDER BY date desc LIMIT $From,$PageNO";

    At the moment the sample output looks like this:

        <table border="1" width="100%">
          <tr> <td>linkid</td> <td>catid</td> <td>linkdes</td> <td>price</td> </tr>
          <tr> <td>1</td> <td>1</td> <td>&nbsp;domain name</td> <td>100</td> </tr>
          <tr> <td>2</td> <td>1</td> <td>&nbsp;hosting&nbsp; plan one</td> <td>40</td> </tr>
          <tr> <td>3</td> <td>2</td> <td>&nbsp;domain name</td> <td>20</td> </tr>
        </table>

    How do I merge two or more rows into one when several cells in the table share the same value (the same catid), so the output looks like this sample?

        <table border="1" width="100%">
          <tr> <td>catid</td> <td>linkdes</td> <td>price</td> </tr>
          <tr> <td>1</td> <td>linkid(1)- domain name linkid(2)- hosting&nbsp; plan one</td> <td>100 40</td> </tr>
          <tr> <td>2</td> <td>&nbsp;domain name</td> <td>20</td> </tr>
        </table>
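
    One way to do the merging in the query itself, rather than in the HTML loop, is MySQL's GROUP_CONCAT. A sketch under the assumption that the columns are named linkid, catid, linkdes and price, as in the sample output (separators are illustrative):

        SELECT catid,
               GROUP_CONCAT(CONCAT('linkid(', linkid, ')- ', linkdes) SEPARATOR '<br>') AS linkdes,
               GROUP_CONCAT(price SEPARATOR ' / ') AS price
        FROM   linkat_link
        WHERE  emailuser = '$email2' OR linkname = '$domain_name2'
        GROUP  BY catid
        ORDER  BY catid;

    Each row of the result then becomes one <tr>, so rows with the same catid are already collapsed before the page is rendered.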

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014 Microsoft has introduced a new database engine component called In-Memory OLTP aka project “Hekaton” which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory resident data. In-memory OLTP helps us create memory optimized tables which in turn offer significant performance improvement for our typical OLTP workload. The main objective of memory optimized table is to ensure that highly transactional tables could live in memory and remain in memory forever without even losing out a single record. The most significant part is that it still supports majority of our Transact-SQL statement. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking. In-Memory OLTP alleviates the issue of locking, using a new type of multi-version optimistic concurrency control. It also substantially reduces waiting for log writes by generating far less log data and needing fewer log writes. Points to remember Memory-optimized tables refer to tables using the new data structures and key words added as part of In-Memory OLTP. Disk-based tables refer to your normal tables which we used to create in SQL Server since its inception. These tables use a fixed size 8 KB pages that need to be read from and written to disk as a unit. Natively compiled stored procedures refer to an object Type which is new and is supported by in-memory OLTP engine which convert it into machine code, which can further improve the data access performance for memory –optimized tables. Natively compiled stored procedures can only reference memory-optimized tables, they can’t be used to reference any disk –based table. Interpreted Transact-SQL stored procedures, which is what SQL Server has always used. Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables. Interop refers to interpreted Transact-SQL that references memory-optimized tables. Using In-Memory OLTP In-Memory OLTP engine has been available as part of SQL Server 2014 since June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014 hence they are not available with 32-bit editions. Creating Databases Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA. 
Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables: CREATE DATABASE InMemoryDB ON PRIMARY(NAME = [InMemoryDB_data], FILENAME = 'D:\data\InMemoryDB_data.mdf', size=500MB), FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA (NAME = [InMemoryDB_mod_dir], FILENAME = 'S:\data\InMemoryDB_mod_dir'), (NAME = [InMemoryDB_mod_dir], FILENAME = 'R:\data\InMemoryDB_mod_dir') LOG ON (name = [SampleDB_log], Filename='L:\log\InMemoryDB_log.ldf', size=500MB) COLLATE Latin1_General_100_BIN2; Above example code creates files on three different drives (D:  S: and R:) for the data files and in memory storage so if you would like to run this code kindly change the drive and folder locations as per your convenience. Also notice that binary collation was specified as Windows (non-SQL). BIN2 collation is the only collation support at this point for any indexes on memory optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA file group to an existing database, use the below command to achieve the same. ALTER DATABASE AdventureWorks2012 ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA; GO ALTER DATABASE AdventureWorks2012 ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod') TO FILEGROUP hekaton_mod; GO Creating Tables There is no major syntactical difference between creating a disk based table or a memory –optimized table but yes there are a few restrictions and a few new essential extensions. Essentially any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause as shown in the Create Table query example. DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY) Memory-optimized table should always be defined with a DURABILITY value which can be either SCHEMA_AND_DATA or  SCHEMA_ONLY the former being the default. A memory-optimized table defined with DURABILITY=SCHEMA_ONLY will not persist the data to disk which means the data durability is compromised whereas DURABILITY= SCHEMA_AND_DATA ensures that data is also persisted along with the schema. Indexing Memory Optimized Table A memory-optimized table must always have an index for all tables created with DURABILITY= SCHEMA_AND_DATA and this can be achieved by declaring a PRIMARY KEY Constraint at the time of creating a table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified. CREATE TABLE Mem_Table ( [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000), [City] VARCHAR(32) NULL, [State_Province] VARCHAR(32) NULL, [LastModified] DATETIME NOT NULL, ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA); Now as you can see in the above query example we have used the clause MEMORY_OPTIMIZED = ON to make sure that it is considered as a memory optimized table and not just a normal table and also used the DURABILITY Clause= SCHEMA_AND_DATA which means it will persist data along with metadata and also you can notice this table has a PRIMARY KEY mentioned upfront which is also a mandatory clause for memory-optimized tables. We will talk more about HASH Indexes and BUCKET_COUNT in later articles on this topic which will be focusing more on Row and Index storage on Memory-Optimized tables. So stay tuned for that as well. Now as we covered the basics of Memory Optimized tables and understood the key things to remember while using memory optimized tables, let’s explore more using examples to understand the Performance gains using memory-optimized tables. 
I will be using the database which i created earlier in this article i.e. InMemoryDB in the below Demo Exercise. USE InMemoryDB GO -- Creating a disk based table CREATE TABLE dbo.Disktable ( Id INT IDENTITY, Name CHAR(40) ) GO CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id) GO -- Creating a memory optimized table with similar structure and DURABILITY = SCHEMA_AND_DATA CREATE TABLE dbo.Memorytable_durable ( Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000), Name CHAR(40) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA) GO -- Creating an another memory optimized table with similar structure but DURABILITY = SCHEMA_Only CREATE TABLE dbo.Memorytable_nondurable ( Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000), Name CHAR(40) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_only) GO -- Now insert 100000 records in dbo.Disktable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END -- Do the same inserts for Memory table dbo.Memorytable_durable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END -- Now finally do the same inserts for Memory table dbo.Memorytable_nondurable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END The above 3 Inserts took 1.20 minutes, 54 secs, and 2 secs respectively to insert 100000 records on my machine with 8 Gb RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional business table and memory- optimized tables with Durability SCHEMA_ONLY is even faster as it does not bother persisting its data to disk which makes it supremely fast. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now, I leave the decision on using memory_Optimized tables on you, I hope you like this article and it helped you understand  the fundamentals of IN-Memory OLTP . Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Koenig

    Read the article

  • Css code for the table

    - by Hulk
    Can someone please tell me how to make this table look better?

        <table>
          <tr><th>Name</th><th>Address</th><th>Occupation</th></tr>
          <tr>
            <td><textarea rows="10" cols="15"></textarea></td>
            <td><textarea rows="10" cols="15"></textarea></td>
            <td><textarea rows="10" cols="15"></textarea></td>
          </tr>
          <tr>
            <td><textarea rows="10" cols="15"></textarea></td>
            <td><textarea rows="10" cols="15"></textarea></td>
            <td><textarea rows="10" cols="15"></textarea></td>
          </tr>
          <tr>
            <td><textarea rows="10" cols="15"></textarea></td>
            <td><textarea rows="10" cols="15"></textarea></td>
            <td><textarea rows="10" cols="15"></textarea></td>
          </tr>
        </table>

    The table is dynamically generated, meaning there could be more rows with td cells containing textareas. Can anyone please suggest CSS to beautify this table, or maybe a link? Thanks.
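
    A small, generic stylesheet sketch that usually makes this kind of generated table more readable (colours and sizes are only placeholders to adjust):

        table {
            border-collapse: collapse;
            width: 100%;
        }
        th, td {
            border: 1px solid #ccc;
            padding: 6px 8px;
            vertical-align: top;
        }
        th {
            background: #f0f0f0;
            text-align: left;
        }
        td textarea {
            width: 100%;
            box-sizing: border-box;  /* keeps the textarea inside the cell */
        }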

    Read the article

  • SSIS Configuration error: Cannot retrieve configuration table schema

    - by Glenn M
    I'm trying to add a simple configuration to an SSIS package, of type SQL Server, i.e. stored in a table. At the end of the wizard, when it tries to write a new row to the nominated table to store the configuration, it fails with this error:

        TITLE: Microsoft Visual Studio
        Could not complete wizard actions.
        Cannot retrieve configuration table schema.
        (Microsoft.DataTransformationServices.Wizards)

    I can't seem to resolve this. The configuration connection has full permissions on the table, and it sees the table and can read from it, as it reports there is no current data for the filter I provide. It just won't write to it. A Google search for the error message above in quotes returns literally no hits! Any suggestions? Glenn
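
    If the wizard refuses to create or write the table, one workaround sometimes used is to create the configuration table by hand first and point the wizard at it. The layout below is the one the package-configuration wizard conventionally generates; treat it as a sketch and verify the names and column sizes against your SSIS version:

        CREATE TABLE [dbo].[SSIS Configurations]
        (
            ConfigurationFilter NVARCHAR(255) NOT NULL,
            ConfiguredValue     NVARCHAR(255) NULL,
            PackagePath         NVARCHAR(255) NOT NULL,
            ConfiguredValueType NVARCHAR(20)  NOT NULL
        );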

    Read the article

  • Wrong figure numbering - Package caption Error: Continued 'figure' after 'table'

    - by Eduardo
    Hello, I am having a problem with the numbering of figures in LaTeX. I am getting this error message:

        Package caption Error: Continued 'figure' after 'table'

    This is my code:

        \begin{table}
        \centering
        \subfloat[Tabla1\label{tab:Tabla1}]{
        \small
        \begin{tabular}{ | c | c | c | c | c |}
        \hline
        \multicolumn{5}{|c|}{\textbf{Tabla 1}} \\ \hline
        ...
        ...
        \end{tabular}
        }
        \qquad
        \subfloat[Tabla2\label{tab:Tabla2}]{
        \small
        \begin{tabular}{ | c | c | c | c | c |}
        \hline
        \multicolumn{5}{|c|}{\textbf{Tabla 2}} \\ \hline
        ...
        ...
        \end{tabular}
        }
        \caption{These are tables}
        \label{tab:Tables}
        \end{table}

        \begin{figure}
        \centering
        \subfloat[][Figure 1]{\label{fig:fig1}\includegraphics[width = 14cm]{fig1}}
        \qquad
        \subfloat[][Figure 2]{\label{fig:fig2}\includegraphics[width = 14cm]{fig2}}
        \end{figure}

        \begin{figure}[t]
        \ContinuedFloat
        \subfloat[][Figure 2]{\label{fig:fig3}\includegraphics[width = 14cm]{fig3}}
        \caption{Those are figures}
        \label{fig:Figures}
        \end{figure}

        \newpage

    What I want is this layout: Table, Table, Figure 1, Figure 2, Figure 3. Since Figure 1 and Figure 2 are too big to fit on one page vertically, I want Figure 3 to be alone on another page; that's why I have the \ContinuedFloat. Visually it looks fine, but the problem is the numbering: the figures get the number 5.2, which is the same number as a figure I have earlier (the correct number should be 5.3). However, if I reference the figures with \ref{fig:fig1}, \ref{fig:fig2} and \ref{fig:fig2}, I get 5.3a, 5.3b and 5.2c - the first two right, the last one wrong. I have been stuck on this for hours; any ideas? Thanks a lot in advance.

    Read the article

  • [CakePHP] Cannot bake table model, controller and view

    - by user198003
    I developed a small CakePHP application, and now I want to add one more table (in fact, model/controller/view) to the system, named notes. I have already created the table, of course. But when I run the command cake bake model, the notes table does not appear in the list. I can add it manually, but after that I get some errors when running cake bake controller and cake bake view. Can you give me some clue why I have these problems, and how to add the new model?
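
    Bake reads the table list from the cached model schema, so a stale cache is a common reason a newly created table does not show up. A hedged sketch for a CakePHP 1.x layout (cache paths and the shell invocation may differ in your setup):

        # clear the cached model descriptions so bake re-reads the schema
        rm -f app/tmp/cache/models/*
        rm -f app/tmp/cache/persistent/*

        # then bake the singular model for the plural "notes" table
        cake bake model Note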

    Read the article

  • [Linq to SQL] Multiple foreign keys to the same table

    - by cdonner
    I have a reference table with all sorts of controlled value lookup data for gender, address type, contact type, etc. Many tables have multiple foreign keys to this reference table I also have many-to-many association tables that have two foreign keys to the same table. Unfortunately, when these tables are pulled into a Linq model and the DBML is generated, SQLMetal does not look at the names of the foreign key columns, or the names of the constraints, but only at the target table. So I end up with members called Reference1, Reference2, ... not very maintenance-friendly. Example: <Association Name="tb_reference_tb_account" Member="tb_reference" <====== ThisKey="shipping_preference_type_id" OtherKey="id" Type="tb_reference" IsForeignKey="true" /> <Association Name="tb_reference_tb_account1" Member="tb_reference1" <====== ThisKey="status_type_id" OtherKey="id" Type="tb_reference" IsForeignKey="true" /> I can go into the DBML and manually change the member names, of course, but this would mean I can no longer round-trip my database schema. This is not an option at the current stage of the model, which is still evolving. Splitting the reference table into n individual tables is also not desirable. I can probably write a script that runs against the XML after each generation and replaces the member name with something derived from ThisKey (since I adhere to a naming convention for these types of keys). Has anybody found a better solution to this problem?
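
    A sketch of the kind of post-generation fix-up mentioned above: a small LINQ to XML pass that renames each association member after its ThisKey. File names and the naming convention are illustrative, and the rename rule would need tightening for real schemas (collisions, composite keys):

        using System;
        using System.Linq;
        using System.Xml.Linq;

        class DbmlMemberRenamer
        {
            static void Main()
            {
                // the DBML document namespace used by SqlMetal / the Linq-to-Sql designer
                XNamespace ns = "http://schemas.microsoft.com/linqtosql/dbml/2007";
                var doc = XDocument.Load("Model.dbml");

                foreach (var assoc in doc.Descendants(ns + "Association"))
                {
                    var thisKey = (string)assoc.Attribute("ThisKey");
                    if (string.IsNullOrEmpty(thisKey)) continue;

                    // e.g. "shipping_preference_type_id" -> "ShippingPreferenceType"
                    var member = string.Concat(
                        thisKey.Replace("_id", "")
                               .Split('_')
                               .Where(p => p.Length > 0)
                               .Select(p => char.ToUpper(p[0]) + p.Substring(1)));

                    assoc.SetAttributeValue("Member", member);
                }

                doc.Save("Model.dbml");
            }
        }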

    Read the article

  • linq2sql : get generic type of table

    - by benpage
    I think this is a simple question, but I've searched around and can't seem to find an answer easily. If you have

        var list = new List<int>();
        ... fill list ...

    and you want to get the generic type of list, I realise you could just type:

        var t = list.FirstOrDefault().GetType();

    Is there another way to do this via just the list, rather than referring to the enumeration? The reason is, I have a System.Data.Linq.Table<TABLE1> and what I want to do is get the type of TABLE1 from it. So:

        var table = new DataContext().TABLE1s;           // this is Table<TABLE1>
        var tableType = table.GetType().SomeMethod();    // I want tableType to equal typeof(TABLE1)
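
    Since Table<TEntity> is itself a generic type, the element type can be read straight off the table's Type via reflection, without touching any rows. A sketch:

        // Table<TABLE1> has exactly one generic argument, so GetGenericArguments()
        // on the runtime type returns { typeof(TABLE1) }
        var table = new DataContext().TABLE1s;
        Type elementType = table.GetType().GetGenericArguments().Single();
        // elementType == typeof(TABLE1)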

    Read the article

  • Grouped table view cells not loading

    - by Tejaswi Yerukalapudi
    Hi, I'm working on creating a grouped table view. The data is being loaded all right, but in the grouped view there are a lot of white empty spaces. They get populated after I scroll up and down a few times. Help? Here's my cellForRowAtIndexPath: method:

        static NSString *Id = @"CustomDiagChargeID";
        CustomCellDiagCharges *cell = (CustomCellDiagCharges *)[tableView dequeueReusableCellWithIdentifier:Id];
        if (cell == nil) {
            NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"CustomCellDiagCharges" owner:self options:nil];
            for (id oneObject in nib) {
                if ([oneObject isKindOfClass:[CustomCellDiagCharges class]])
                    cell = (CustomCellDiagCharges *) oneObject;
            }
        }
        NSUInteger row = [indexPath row];
        DiagDetails *rowData = [preferences getDiagElementAt:indexPath.section row:row];
        cell.code.text = rowData.ICD9Code;
        cell.desc.text = rowData.ICD9Desc;
        return cell;

    Thanks, Teja.

    Read the article

  • divs not displaying as a table

    - by CoffeeCode
    I made this CSS:

        DIV.TableContainer { display: table; background-color: Aqua; }
        DIV.TableRow { display: table-row; }
        DIV.TableCell { display: table-cell; }

    HTML page:

        <div class="TableContainer">
          <div class="TableRow">
            <div class="TableCell">
              <h4>Left Col</h4>
              <p>...</p>
            </div>
            <div class="TableCell">
              <h4>Right Col</h4>
              <p>...</p>
            </div>
          </div>
        </div>

    But it doesn't display as a table. Have I missed something?
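
    The markup itself is the standard display:table pattern, so when it renders as stacked blocks the usual suspects are a browser without display:table support (IE7 and earlier) or a missing doctype pushing the page into quirks mode. A sketch of the same rules with explicit widths, which also makes the two columns split evenly once table layout does kick in:

        DIV.TableContainer { display: table; width: 100%; background-color: Aqua; }
        DIV.TableRow       { display: table-row; }
        DIV.TableCell      { display: table-cell; width: 50%; vertical-align: top; }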

    Read the article

  • (fluent) nhibernate conditional table mapping strategy

    - by grenade
    I have no control over database schema and have the following (simplified) table structure:

        CityProfile     Id  Name
        CountryProfile  Id  Name
        RegionProfile   Id  Name

    I have a .Net enum and class encapsulating the lot:

        public enum Scope { Region, Country, City }

        public class Profile
        {
            public Scope Scope { get; set; }
            public int Id { get; set; }
            public string Name { get; set; }
        }

    I am looking for a mechanism that allows me to map to the correct table, something like:

        public class ProfileMap : ClassMap<Profile>
        {
            public ProfileMap()
            {
                switch (x => x.Scope)   // <-- Invalid code here!
                {
                    case Scope.City:    Table("CityProfile");    break;
                    case Scope.Country: Table("CountryProfile"); break;
                    case Scope.Region:  Table("RegionProfile");  break;
                }
                Id(x => x.Id);
                Map(x => x.Name);
            }
        }

    Or have I approached this wrong?
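
    Fluent NHibernate resolves the table when the mapping is built, not per row, so a property value cannot drive the table choice at runtime. One common way to get the same effect is one subclass per table, each with its own ClassMap, and the Scope enum becomes redundant; a sketch under that assumption:

        public class CityProfile    : Profile { }
        public class CountryProfile : Profile { }
        public class RegionProfile  : Profile { }

        public class CityProfileMap : ClassMap<CityProfile>
        {
            public CityProfileMap()
            {
                Table("CityProfile");
                Id(x => x.Id);
                Map(x => x.Name);
            }
        }
        // CountryProfileMap and RegionProfileMap follow the same pattern,
        // pointing at "CountryProfile" and "RegionProfile" respectively.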

    Read the article

  • Create System.Data.Linq.Table in Code for Testing

    - by S. DePouw
    I have an adapter class for Linq-to-Sql: public interface IAdapter : IDisposable { Table<Data.User> Activities { get; } } Data.User is an object defined by Linq-to-Sql pointing to the User table in persistence. The implementation for this is as follows: public class Adapter : IAdapter { private readonly SecretDataContext _context = new SecretDataContext(); public void Dispose() { _context.Dispose(); } public Table<Data.User> Users { get { return _context.Users; } } } This makes mocking the persistence layer easy in unit testing, as I can just return whatever collection of data I want for Users (Rhino.Mocks): Expect.Call(_adapter.Users).Return(users); The problem is that I cannot create the object 'users' since the constructors are not accessible and the class Table is sealed. One option I tried is to just make IAdapter return IEnumerable or IQueryable, but the problem there is that I then do not have access to the methods ITable provides (e.g. InsertOnSubmit()). Is there a way I can create the fake Table in the unit test scenario so that I may be a happy TDD developer?
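
    One way to keep InsertOnSubmit and friends without having to construct a real Table<T> in tests is to expose the System.Data.Linq.ITable<TEntity> interface (added in .NET 4.0) from the adapter instead of the sealed concrete class; Table<T> implements it, and an interface is easy to stub with Rhino.Mocks. A sketch, assuming that interface is available in your framework version:

        public interface IAdapter : IDisposable
        {
            ITable<Data.User> Users { get; }
        }

        public class Adapter : IAdapter
        {
            private readonly SecretDataContext _context = new SecretDataContext();

            public ITable<Data.User> Users
            {
                get { return _context.Users; }   // Table<Data.User> implements ITable<Data.User>
            }

            public void Dispose()
            {
                _context.Dispose();
            }
        }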

    Read the article

  • How can parallelism affect number of results?

    - by spender
    I have a fairly complex query that looks something like this: create table Items(SomeOtherTableID int,SomeField int) create table SomeOtherTable(Id int,GroupID int) with cte1 as ( select SomeOtherTableID,COUNT(*) SubItemCount from Items t where t.SomeField is not null group by SomeOtherTableID ),cte2 as ( select tc.SomeOtherTableID,ROW_NUMBER() over (partition by a.GroupID order by tc.SubItemCount desc) SubItemRank from Items t inner join SomeOtherTable a on a.Id=t.SomeOtherTableID inner join cte1 tc on tc.SomeOtherTableID=t.SomeOtherTableID where t.SomeField is not null ),cte3 as ( select SomeOtherTableID from cte2 where SubItemRank=1 ) select * from cte3 t1 inner join cte3 t2 on t1.SomeOtherTableID<t2.SomeOtherTableID option (maxdop 1) The query is such that cte3 is filled with 6222 distinct results. In the final select, I am performing a cross join on cte3 with itself, (so that I can compare every value in the table with every other value in the table at a later point). Notice the final line : option (maxdop 1) Apparently, this switches off parallelism. So, with 6222 results rows in cte3, I would expect (6222*6221)/2, or 19353531 results in the subsequent cross joining select, and with the final maxdop line in place, that is indeed the case. However, when I remove the maxdop line, the number of results jumps to 19380454. I have 4 cores on my dev box. WTF? Can anyone explain why this is? Do I need to reconsider previous queries that cross join in this way?
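
    A likely explanation (hedged, since it depends on the plan): cte3 is referenced twice in the final select, so it can be evaluated twice, and ROW_NUMBER ordered only by SubItemCount breaks ties arbitrarily — a parallel plan can resolve those ties differently in the two evaluations, so the two sides of the cross join no longer see the same 6222 rows. One way to verify, and to make the result stable, is to materialise the ranked set once and self-join that. A sketch:

        -- keep the WITH cte1/cte2 definitions exactly as above, but end the statement with:
        SELECT SomeOtherTableID
        INTO   #cte3
        FROM   cte2
        WHERE  SubItemRank = 1;

        -- both references now see the identical materialised row set,
        -- regardless of the degree of parallelism
        SELECT t1.SomeOtherTableID, t2.SomeOtherTableID
        FROM   #cte3 t1
        INNER JOIN #cte3 t2
                ON t1.SomeOtherTableID < t2.SomeOtherTableID;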

    Read the article

  • latex list environment inside the tabular environment: extra line at top preventing alignment

    - by Usagi
    Hello good people of stackoverflow. I have a LaTeX question that is bugging me. I have been trying to get a list environment to appear correctly inside the tabular environment. So far I have gotten everything to my liking except one thing: the top of the list does not align with other entries in the table, in fact it looks like it adds one line above the list... I would like to have these lists at the top. This is what I have, a custom list environment: \newenvironment{flushemize}{ \begin{list}{$\bullet$} {\setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \setlength{\partopsep}{0pt} \setlength{\topsep}{0pt} \setlength{\leftmargin}{12pt}}}{\end{list}} Renamed ragged right: \newcommand{\rr}{\raggedright} and here is my table: \begin{table}[H]\caption{Tank comparisons}\label{tab:tanks} \centering \rowcolors{2}{white}{tableShade} \begin{tabular}{p{1in}p{1.5in}p{1.5in}rr} \toprule {\bf Material} & {\bf Pros} & {\bf Cons} & {\bf Size} & {\bf Cost} \\ \midrule \rr Reinforced concrete &\rr \begin{flushemize}\item Strong \item Secure \end{flushemize}&\rr \begin{flushemize}\item Prone to leaks \item Relatively expensive to install \item Heavy \end{flushemize} & 100,000 gal & \$299,400 \\ \rr Steel & \begin{flushemize}\item Strong \item Secure \end{flushemize} & \begin{flushemize}\item Relatively expensive to install \item Heavy \item Require painting to prevent rusting \end{flushemize} & 100,000 gal & \$130,100 \\ \rr Polypropylene & \begin{flushemize}\item Easy to install \item Mobile \item Inexpensive \item Prefabricated \end{flushemize} & \begin{flushemize}\item Relatively insecure \item Max size available 10,000 gal \end{flushemize} & 10,000 gal & \$5,000 \\ \rr Wood & \begin{flushemize}\item Easy to install \item Mobile \item Cheap to install \end{flushemize} & \begin{flushemize}\item Prone to rot \item Must remain full once constructed \end{flushemize} & 100,000 gal & \$86,300\\ \bottomrule \end{tabular} \end{table} Thank you for any advice :)

    Read the article
