Search Results

Search found 9542 results on 382 pages for 'row'.

  • When importing an Access table into Excel, a look-up column is showing all values as numbers

    - by user3651997
    I have a basic Access to Excel question that has me frustrated. I have two Access 2010 data tables. One is a list of managers. The primary key is a manager ID (which is an autonumber because managers can have the same name), and each row also has manager name, manager email, etc. The second data table is a list of departments. The primary key for each row is a unique department code, and the foreign key is a manager ID (autonumber). I used the Look-up Wizard to create this connection. However, Access does not show the manager ID in the foreign key location. It shows Manager Name like I requested when I used the Look-up Wizard. Now I am trying to import the second table (departments) into Excel 2010. I clicked import from Access, chose the Department table, and everything popped into Excel. BUT, the Manager Name column is showing Manager ID instead. So I have a list of numbers instead of names. How can I make Excel show what I see in Access? Thanks!
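
    What Access shows in that lookup column is display sugar: the table stores the ManagerID and merely renders the manager's name, and Excel imports what is stored. A hedged workaround is to import a saved query instead of the table, with the join written out explicitly (table and field names below are guesses; substitute your own):

        SELECT d.DepartmentCode, m.ManagerName, m.ManagerEmail
        FROM Departments AS d
        INNER JOIN Managers AS m ON d.ManagerID = m.ManagerID;

    Save that as a query in Access; Excel 2010's Data > From Access dialog lists saved queries alongside tables, so importing the query gives you names instead of IDs.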

  • Neophyte question about using Subtotal and CountIf in Excel

    - by Andrew
    Hi, I'm using Excel and having some problems with COUNTIF; I don't understand how it works differently from SUBTOTAL. I used the GUI to subtotal stuff and all the subtotals are right. Then I attempted to use COUNTIF to see how many requirements passed. That worked for the first subtotal only. It's easy to see why. When I look at the box for the subtotal, it says: =SUBTOTAL(3,C286:C292) When I look at my formula for passed requirements, I have: =IF(ISTEXT(A285),COUNTIF(C286:C338,"=Passed"),"") Notice that the end of the range is wrong (C338 instead of C292). How did the Subtotal manage to keep this correct? I typed in the formula for passed requirements and dragged it down the page. Everything behaved as expected (even the bit about ISTEXT dutifully figured out which row was which), but it got the end of each range wrong. Any ideas? SRS Maintenance Count 7 44 SRS Maintenance Passed SRS Maintenance Passed SRS Maintenance Passed SRS Maintenance Passed SRS Maintenance Passed SRS Maintenance Passed SRS Maintenance Passed SRS Reports Count 12 43 SRS Reports Passed SRS Reports Passed SRS Reports Passed SRS Reports Passed SRS Reports Failed SRS Reports Passed SRS Reports Passed SRS Reports Failed SRS Reports Passed SRS Reports Passed SRS Reports Failed
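
    A hedged explanation: the Subtotal wizard sizes each SUBTOTAL's range to its own group, while a dragged formula keeps exactly the shape you typed, so every copy of the COUNTIF runs from the row below it all the way down to row 338. One way to avoid hand-sizing each range (a sketch, assuming the group name repeats in column A on every data row as in the sample, and that you have Excel 2007+ for COUNTIFS) is to key the count on the group label instead:

        =IF(ISTEXT(A285),COUNTIFS($A:$A,A286,$C:$C,"Passed"),"")

    Here A286 (the first data row of the group) supplies the label, so the count stays correct however far the formula is dragged.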

  • Problem with diacritics on psql 9.0 (PostgreSQL)

    - by Gaks
    I have two instances of PostgreSQL installed on my server: 8.3 and 9.0. There seems to be some problem with Polish diacritic characters (like ółęąśżźć) in the postgresql 9.0 client, psql. When I connect to a DB (either 8.3 or 9.0) with psql 8.3, I can type all diacritics on the terminal without any problems: www:/tmp# sudo -u postgres /usr/lib/postgresql/8.3/bin/psql -q postgres=# ółśćń However, when I connect to the same DBs with the psql 9.0 client, I can't type diacritics on the terminal anymore: www:/tmp# sudo -u postgres /usr/lib/postgresql/9.0/bin/psql -q Here are some encoding settings: www:/tmp# sudo -u postgres /usr/lib/postgresql/9.0/bin/psql -q -c "show client_encoding" client_encoding ----------------- UTF8 (1 row) . www:/tmp# sudo -u postgres /usr/lib/postgresql/8.3/bin/psql -q -c "show client_encoding" client_encoding ----------------- UTF8 (1 row) . www:/tmp# sudo -u postgres /usr/lib/postgresql/9.0/bin/psql -q -l List of databases Name | Owner | Encoding | Collation | Ctype | Access privileges ---------------------+--------------+----------+-------------+-------------+----------------------- postgres | postgres | UTF8 | pl_PL.UTF-8 | pl_PL.UTF-8 | . www:/tmp# echo $LANG pl_PL.UTF-8 It looks like the DB/cluster configuration doesn't matter - psql 8.x works fine on the terminal and psql 9.x does not. Any idea how to fix that?
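
    One thing worth checking, offered as an assumption rather than a diagnosis: whether the 9.0 psql is linked against libedit instead of GNU readline. libedit's multibyte input handling is notoriously weak, and it would affect only what you can type, which matches the symptom (the client_encoding settings above are identical):

        www:/tmp# ldd /usr/lib/postgresql/9.0/bin/psql | grep -iE 'readline|edit'
        www:/tmp# ldd /usr/lib/postgresql/8.3/bin/psql | grep -iE 'readline|edit'

    As a quick cross-check, psql -n disables line editing entirely; if diacritics work under psql -n, the line-editing library is the culprit.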

  • Pivot tables: How can I total the subtotal?

    - by Mike
    Person A needs £115, Person D £234 and Person G £789, but how do I SUM that and get it to show on the same ROW as the subtotal? The Rows are subscription names. The Value field holds the cost per subscription. The Columns hold the name of the person who receives the subscription. I have GROUPED on YEAR & MONTH, and have a subtotal that shows me how much each person will need to pay each month for all their subscriptions, but I need a figure showing me the total of all the subscriptions per month. I've tried adding calculated fields, but I want to SUM the subtotals, so I'm struggling to see the field I need to use. I've tried Grand Totals, but that SUMs all rows and I really only want to SUM the subtotal row. I need a nice neat report that my managers won't go white at when looking at it... too many numbers = fear and confusion. Anyway, it got messy, so I've come for help. Cheers, Mike.

  • --log-slave-updates is OFF but updates received from master are still logged to slave binary log?

    - by quanta
    MySQL version 5.5.14. According to the documentation, by default a slave does not log to its binary log any updates that are received from a master server. Here is my config on the slave: # egrep 'bin|slave' /etc/my.cnf relay-log=mysqld-relay-bin log-bin = /var/log/mysql/mysql-bin binlog-format=MIXED sync_binlog = 1 log-bin-trust-function-creators = 1 mysql> show global variables like 'log_slave%'; +-------------------+-------+ | Variable_name | Value | +-------------------+-------+ | log_slave_updates | OFF | +-------------------+-------+ 1 row in set (0.01 sec) mysql> select @@log_slave_updates; +---------------------+ | @@log_slave_updates | +---------------------+ | 0 | +---------------------+ 1 row in set (0.00 sec) but the slave still logs the updates that are received from a master to its binary logs. Let's see the file sizes: -rw-rw---- 1 mysql mysql 37M Apr 1 01:00 /var/log/mysql/mysql-bin.001256 -rw-rw---- 1 mysql mysql 25M Apr 2 01:00 /var/log/mysql/mysql-bin.001257 -rw-rw---- 1 mysql mysql 46M Apr 3 01:00 /var/log/mysql/mysql-bin.001258 -rw-rw---- 1 mysql mysql 115M Apr 4 01:00 /var/log/mysql/mysql-bin.001259 -rw-rw---- 1 mysql mysql 105M Apr 4 18:54 /var/log/mysql/mysql-bin.001260 and a sample query when reading these binary files with the mysqlbinlog utility: #120404 19:08:57 server id 3 end_log_pos 110324763 Query thread_id=382435 exec_time=0 error_code=0 SET TIMESTAMP=1333541337/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'118212' COLLATE 'utf8_general_ci')) /*!*/; # at 110324763 Did I miss something?
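
    A hedged way to tell where those events actually come from: every binlog event records the server_id of the server that originally executed it, and log_slave_updates only suppresses logging of events that arrived through replication. Statements executed directly on the slave (an application or cron job writing to it) are always logged. Compare the slave's own server_id with the "server id" stamped on the events (the sample above shows server id 3):

        mysql> SHOW VARIABLES LIKE 'server_id';
        # mysqlbinlog /var/log/mysql/mysql-bin.001260 | grep 'server id' | sort | uniq -c

    If the ids match the slave's own, the writes are local, and log_slave_updates is behaving exactly as documented.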

  • How seriously should I take ECC correctable error warnings?

    - by David Mackintosh
    I have a pile of Sun X2200-M2 servers. These servers have ECC memory. In some of these servers, I am getting warnings in the eLOM about "correctable ECC errors detected", e.g.: # ssh regress11 ipmitool sel elist 1 | 05/20/2010 | 14:20:27 | Memory CPU0 DIMM2 | Correctable ECC | Asserted 2 | 05/20/2010 | 14:33:47 | Memory CPU0 DIMM2 | Correctable ECC | Asserted ...some more frequently than others. The kernel on this particular system is throwing EDAC errors as well, although far more frequently than the eLOM is recording ECC events: EDAC k8 MC0: general bus error: participating processor(local node response), time-out(no timeout) memory transaction type(generic read), mem or i/o(mem access), cache level(generic) MC0: CE page 0x42a194, offset 0x60, grain 8, syndrome 0xf654, row 4, channel 1, label "": k8_edac MC0: CE - no information available: k8_edac Error Overflow set EDAC k8 MC0: extended error code: ECC chipkill x4 error EDAC k8 MC0: general bus error: participating processor(local node response), time-out(no timeout) memory transaction type(generic read), mem or i/o(mem access), cache level(generic) MC0: CE page 0x48cb94, offset 0x10, grain 8, syndrome 0xf654, row 5, channel 1, label "": k8_edac MC0: CE - no information available: k8_edac Error Overflow set EDAC k8 MC0: extended error code: ECC chipkill x4 error Now if the server detects an uncorrectable ECC error, the system resets, so clearly that's bad, and removing/replacing the identified stick or pair corrects the issue. But am I right in thinking that if the errors are correctable there's no immediate issue - that I can treat them as a warning and be prepared to pull the stick/pair if uncorrectable errors start occurring?
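
    For watching the trend rather than eyeballing dmesg, the kernel's EDAC driver exposes running counters in sysfs; a rough sketch (paths vary a little by kernel version):

        # cat /sys/devices/system/edac/mc/mc0/ce_count     # corrected errors, memory controller 0
        # cat /sys/devices/system/edac/mc/mc0/ue_count     # uncorrected errors
        # grep . /sys/devices/system/edac/mc/mc0/csrow*/ce_count   # per-row breakdown

    Graphing ce_count over time (or just sampling it from cron) turns "some more frequently than others" into a rate you can set a replacement threshold against.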

  • Interactive console-based CSV editor

    - by Penguin Nurse
    Although spreadsheet applications for editing CSV files on the console were among the earliest killer applications for personal computers, only a few of them are still actively maintained, and even less documentation about them. After an extensive search of the web, manpages and source code, I ended up with the following three applications, all of which have fundamental drawbacks: sc (abbrev. for spreadsheet calculator): a nice tool with vi keybindings, but it does not put strings containing the delimiter into quotes when exporting to delimiter-separated format, and it can't import csv files correctly, i.e. all numbers are interpreted as strings. GNU oleo: doesn't seem to have been actively maintained since 2001, and there are therefore no packages for major Linux distributions. teapot: offers packages for various operating systems, but uses, for example, counter-intuitive naming for cells (numbers for row and column, i.e. 11 seems to be intended to mean row 1, column 1) and carries superfluous code for an FLTK GUI. Various Emacs modes also do not quote strings containing the delimiter well, or require much more typing for entering the scaffold of a table. Therefore I would be very grateful for a way of overcoming one of these drawbacks, or for any hints towards another console-based CSV editor. It needn't actually do any calculations, just edit cells or whole columns and rows.

  • Storing changes to multiple databases in a single centralized database

    - by B4x
    The setup: multiple MySQL databases at different locations with the same schema. The databases are in production. The motivation: we want to present information from these databases in a web interface, clearly showing which database each row originated from. We want to be able to get this data from one single source (for different reasons, one of them is pagination, which gets tricky if you use multiple sources). The problem: how do we collect data from multiple databases, storing it at a central location and clearly marking the origin of each row? We have discussed using a centralized DB that tracks changes to the production DBs, with the same schema and one additional column for origin. If possible, we would like to avoid having to make changes in the production environment. Since we can't use MySQL's replication (multiple masters to a single slave isn't allowed), what are our other options? Are there any existing solutions for something like this, or do we have to code something ourselves? Is the best solution to change the database schemas in production and add a column for origin? The idea of a centralized database isn't set in stone. If there is a solution to this that solves our other problems without a centralized DB, we can be flexible. Any help is much appreciated.
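
    A minimal sketch of the central table under discussion, hedged and with hypothetical names (orders and updated_at stand in for your schema): keep the production layout and add only the origin column, stamped by whatever collector job does the copying. The catch is that change tracking still needs something to key on - a timestamp column in production, or reading the binlogs - and that is the part that may force a production change after all:

        -- on the central server, one table per replicated production table
        CREATE TABLE central_orders LIKE orders;
        ALTER TABLE central_orders ADD COLUMN origin VARCHAR(32) NOT NULL;

        -- collector job, run per site over a direct connection or a dump
        INSERT INTO central_orders
        SELECT o.*, 'site_A' FROM site_a.orders o WHERE o.updated_at > @last_sync;

    With the origin column in place, the paginated web view becomes a single ORDER BY ... LIMIT over one table.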

  • Excel 2007 pivot table does not aggregate properly

    - by Patrick
    I am using an Excel pivot table to summarize some data and just found a problem. The problem deals with how aggregate values are calculated. Let's say I have a table of data with three columns: Name, Date, Value, and I create a pivot table where Name and then Date are used as Row Labels and Value is the aggregate value, i.e. Average. The pivot table will look something like this: +John .3450 5/14/2010 1.234 5/15/2010 3.450 5/16/2010 -3.25 What I think should be happening here is that the values for each date are averaged, and then those values are averaged to come up with the value in the same row as the Name, John. But that is not what it does. It takes the average for each date, which it shows across from the date, but then instead of taking the average of those numbers, it uses the raw data and computes the average for all of John's values. It should show the average of the daily averages to correspond with the tree hierarchy, but instead it just shows me the average of all of John's values. It essentially aggregates at only one level, but visually creates sub-levels that it is not using. Does anyone know how to change this, or understand by what logic this makes sense? Why would I create any sub-groupings if I cannot compute aggregates on them?

  • How to Programmatically Split and Manipulate Rows of Data From Excel

    - by Charlene
    I am hoping one of you will be able to help get me started on this issue. I need to create some sort of macro or VBA code to split and manipulate rows of data in Excel. For this example, we have 5 rows of data. The first 3 rows are item information for Order # 0000000000-00 and the last 2 rows are item information for order # 0000000000-01. I need one row ("HDR") for each order number, and one row ("ITM") for each product per order. I have included an example below showing the data I will receive and the desired outcome. Raw Data: order-id product-num date buyer-name product-name quantity-purchased 0000000000-00 10000000000000 5/29/2014 John Doe Product 0 1 0000000000-00 10000000000001 5/29/2014 John Doe Product 1 2 0000000000-00 10000000000002 5/29/2014 John Doe Product 2 1 0000000000-01 10000000000002 5/30/2014 Jane Doe Product 2 1 0000000000-01 10000000000003 5/30/2014 Jane Doe Product 3 1 Desired Outcome: HDR 0000000000-00 John Doe 5/29/2014 ITM 10000000000000 Product 0 1 ITM 10000000000001 Product 1 2 ITM 10000000000002 Product 2 1 HDR 0000000000-01 Jane Doe 5/30/2014 ITM 10000000000002 Product 2 1 ITM 10000000000003 Product 3 1 Any and all help would be much appreciated!!! Thank you.
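
    A hedged VBA sketch of one way to start: walk the raw rows top to bottom, write an HDR row whenever the order-id changes, then an ITM row for each line. It assumes the raw data is on Sheet1 in columns A-F (matching the sample's column order) with headers in row 1, and that output goes to Sheet2; adjust names and columns to your workbook:

        Sub SplitOrders()
            Dim src As Worksheet, dst As Worksheet
            Dim r As Long, outRow As Long
            Dim lastOrder As String
            Set src = Worksheets("Sheet1")
            Set dst = Worksheets("Sheet2")
            outRow = 1
            For r = 2 To src.Cells(src.Rows.Count, "A").End(xlUp).Row
                If src.Cells(r, "A").Value <> lastOrder Then
                    lastOrder = src.Cells(r, "A").Value
                    dst.Cells(outRow, 1).Value = "HDR"
                    dst.Cells(outRow, 2).Value = src.Cells(r, "A").Value ' order-id
                    dst.Cells(outRow, 3).Value = src.Cells(r, "D").Value ' buyer-name
                    dst.Cells(outRow, 4).Value = src.Cells(r, "C").Value ' date
                    outRow = outRow + 1
                End If
                dst.Cells(outRow, 1).Value = "ITM"
                dst.Cells(outRow, 2).Value = src.Cells(r, "B").Value     ' product-num
                dst.Cells(outRow, 3).Value = src.Cells(r, "E").Value     ' product-name
                dst.Cells(outRow, 4).Value = src.Cells(r, "F").Value     ' quantity
                outRow = outRow + 1
            Next r
        End Sub

    This relies on the raw data arriving grouped by order-id, as in the sample; if it doesn't, sort on column A first.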

  • Blending Background for Polar Distortion in GIMP

    - by Chris S
    I followed a tutorial to perform a polar distortion on a panoramic image. The instructions are geared toward Photoshop but seem to mostly apply to GIMP as well. The only thing I couldn't really figure out was how they were able to automatically fill in the area around the circle by "extending" the border of the circle. In GIMP, performing the polar distortion leaves a blank white canvas around the circle, not the attractive blended background shown in the tutorial. Is there an easy way to achieve this? The only way I found was to reserve half of the "square" as blank canvas and then manually copy the image's top row of pixels over this empty portion. Then, after the polar distortion, I crop out the extra area. Although this achieves the effect, it seems a bit awkward. How do you stretch selections? Ideally, I just want to select the top row and stretch it vertically until it fills half of the canvas. Instead I had to manually copy, paste, translate, etc.

  • My system administrator set up 2 databases that sync. Master-Master. However, these two databases a

    - by Alex
    DB1 and DB2. I made changes to DB1, and it does not seem to be on DB2. When I do "SHOW SLAVE STATUS\G" on DB2, there seems to be an error: mysql> show slave status\G *************************** 1. row *************************** Slave_IO_State: Waiting for master to send event Master_Host: Master_User: Master_Port: Connect_Retry: 60 Master_Log_File: mysql-bin.0005496 Read_Master_Log_Pos: 5445649315 Relay_Log_File: mysqld-relay-bin.0041705 Relay_Log_Pos: 1624302119 Relay_Master_Log_File: mysql-bin.0004461 Slave_IO_Running: Yes Slave_SQL_Running: No Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 1062 Last_Error: Error 'Duplicate entry '4779' for key 1' on query. Default database: 'falc'. Query: 'INSERT INTO `log` (`anon_id`, `created_at`, `query`, `episode_url`, `detail_id`, `ip`) VALUES ('fdzn1d45kMavF4qbyePv', '2009-11-19 04:19:13', 'amazon', '', '', '130.126.40.57')' Skip_Counter: 0 Exec_Master_Log_Pos: 162301982 Relay_Log_Space: 136505187184 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: No Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: NULL 1 row in set (0.00 sec) Then, I did show tables, and it seems like DB2 is lacking a table that I created on DB1...that means that for some reason, DB2 stopped syncing with DB1. How can I simply allow them to be in full synchronization again? All I want is DB2 to be exactly the same as DB1!
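
    The immediate unblock, hedged (it permanently skips the conflicting statement, so only use it if the row with key 4779 on DB2 is already correct): stop the slave SQL thread, skip the one duplicate-key event, and restart replication:

        mysql> STOP SLAVE;
        mysql> SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
        mysql> START SLAVE;
        mysql> SHOW SLAVE STATUS\G   -- Slave_SQL_Running should return to Yes

    Note that Relay_Master_Log_File (mysql-bin.0004461) is far behind Master_Log_File (mysql-bin.0005496), so expect a long catch-up; and if the two masters have genuinely diverged, re-seeding DB2 from a fresh dump of DB1 is safer than skipping errors one by one.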

  • Is it possible to have a conditional formatting cell "visually cycle" through all the formats that evaluated true?

    - by Ben
    Like the title says, "In Excel, when a cell has multiple conditional formatting rules that evaluate true, is it possible to have the cell "visually cycle" through all the formats that evaluated true? If not, suggestions on what to do would be appreciated!" I'm creating an employee schedule for a business that has multiple job areas that need to have an employee assigned to cover. The schedule is currently set up with the date on the top row, employee list down the left column, and the employee's assigned "job area" cross-referencing with the date on the top row. Originally it was set up where if every required "job area" didn't have someone assigned to it, the date would (via conditional formatting) change to red. I've set it up now that if a condition isn't met, the date will change to the color of the "job area" that doesn't have an employee assigned to it. However, there are cases where multiple job areas don't have an employee assigned, but the date will only change color based on the first condition that isn't met. It'd be nice if there was some way for the date cell to cycle through the different colors that correspond to the job areas where no one is assigned. I have a hunch that's not possible though. If it is possible, I'd love to know how to do it. And if it isn't, if anyone has any suggestions on how I can modify the Excel sheet to make it easier to identify the job areas that don't have anyone assigned to them, I would appreciate it. FYI This schedule goes out months in advance.
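
    On the first question: no - a cell ends up with exactly one displayed format (Excel 2007 and later apply the matching rule with the highest priority; older versions stop at the first true condition), so built-in conditional formatting cannot cycle. If flashing is really wanted it has to be scripted. A rough, hypothetical VBA sketch that rotates one date cell through a set of colors on a timer; the sheet name, cell and colors are placeholders, and deciding which job areas are unstaffed is left to your own logic:

        Dim colorStep As Long

        Sub CycleColors()
            Dim colors As Variant
            colors = Array(vbRed, vbYellow, vbGreen)  ' one color per uncovered job area
            Worksheets("Schedule").Range("B1").Interior.Color = colors(colorStep)
            colorStep = (colorStep + 1) Mod (UBound(colors) + 1)
            Application.OnTime Now + TimeValue("0:00:01"), "CycleColors"
        End Sub

    A calmer alternative that avoids animation entirely: one thin helper row per job area underneath the date row, each with its own single conditional format, so every uncovered area shows at once instead of taking turns.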

  • How to include worksheets 3 and 4 in the cell formula provided?

    - by user21255
    I have been kindly given this formula with an explanation of how it works: Insert this formula into the cell B4 of the sheet "Cases": =IF(NOT(ISBLANK('1st'!B25)),'1st'!B25,IF(NOT(ISBLANK(INDIRECT("'2nd'!R" & (ROW($B4)-(COUNTA('1st'!$B:$B)-COUNTA('1st'!$B$1:$B$24))-4+25) & "C" & COLUMN(B4),FALSE))),INDIRECT("'2nd'!R" & (ROW($B4)-(COUNTA('1st'!$B:$B)-COUNTA('1st'!$B$1:$B$24))-4+25) & "C" & COLUMN(B4),FALSE),"")) Copy the formula to the other cells in the worksheet; the relative addresses will adjust automatically. The formula works like this: Check if there is content in 1st. If yes, copy it. If no, find out how many entries there are in 1st in total. (This is done by using the COUNTA function on the whole B column in 1st and subtracting the number of non-empty cells above the actual case data.) Use this information together with the current cell's number to find out the location of the cell that has to be copied from 2nd. Create the address of the cell and use the ISBLANK function on the INDIRECT function with that address to check if the cell is empty. If it is not, use the INDIRECT function again to display it. If it is empty, just display an empty string. Now this works fine when I have only 2 sheets. But let's say I want to include a third and fourth sheet (named 3rd and 4th respectively): what should I add, and where should it go in the formula above? There are actually 31 sheets, but if I know how to add the 3rd and 4th sheets to the formula, then I can figure out how to do the rest. Thanks
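
    A structural sketch of the extension, hedged (the -4+25 offsets below are copied from the formula above and depend on where each sheet's data starts): the formula is one IF per sheet, so a third sheet means nesting one more IF whose INDIRECT row offset subtracts the entry counts of both earlier sheets:

        =IF(NOT(ISBLANK('1st'!B25)),'1st'!B25,
         IF(NOT(ISBLANK(<cell from '2nd'>)),<cell from '2nd'>,
         IF(NOT(ISBLANK(<cell from '3rd'>)),<cell from '3rd'>,"")))

        where <cell from '2nd'> is the existing INDIRECT expression, and <cell from '3rd'>
        is the same pattern with '3rd' and a row offset of
        ROW($B4)-(COUNTA('1st'!$B:$B)-COUNTA('1st'!$B$1:$B$24))
                -(COUNTA('2nd'!$B:$B)-COUNTA('2nd'!$B$1:$B$24))-4+25

    The fourth sheet nests one more IF of the same shape, subtracting the counts of '1st', '2nd' and '3rd'; with 31 sheets, generating the formula with a small macro may be saner than typing it.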

  • Formula-based Excel page headers

    - by Jake Krohn
    I'm using the "Rows to repeat at top" function in Excel's "Page Setup" dialog to ensure that a multi-row header block appears on every printed page of my worksheet. However, I'd like to be able to change certain bits of the header based on the content of the current page. I would simply like to display the value of one cell in the first row that is printed on the page. If this is my header: Section: xx And the data looks like this (columns are Section and Name): 1 Foo 1 Bar 2 Baz I want the "xx" in the header to be "1". If, further down on the next page, the value in the Section column is "3", I want that printed in the header of the next page. I originally thought that using the "OFFSET" function might help, e.g. ="Section: "&OFFSET(A2, 1, 0) But it only shows the offset from the original placement of the header, thus only working on page 1. The end document is a PDF, so right now I'm able to go back in with the "TouchUp Text Tool" in Acrobat and add the numbers page by page. But it gets to be a tedious process with 70+ page reports. Anyone have any better ideas that don't require me mucking up the original Excel document with inserted headers every N lines? This is Excel 2008 for Mac, if it makes a difference.
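
    Header formulas can't see which page is being printed, so one workaround (a hedged VBA sketch, not a built-in feature) is to print the sheet one page at a time, setting the header from the first data row of each page; HPageBreaks marks where pages start. It assumes the Section values are in column A:

        Sub PrintWithSectionHeaders()
            Dim ws As Worksheet, pg As Long, firstRow As Long
            Set ws = ActiveSheet
            For pg = 1 To ws.HPageBreaks.Count + 1
                If pg = 1 Then
                    firstRow = 2                               ' first data row on page 1
                Else
                    firstRow = ws.HPageBreaks(pg - 1).Location.Row
                End If
                ws.PageSetup.CenterHeader = "Section: " & ws.Cells(firstRow, 1).Value
                ws.PrintOut From:=pg, To:=pg
            Next pg
        End Sub

    One caveat for this exact setup: Excel 2008 for Mac has no VBA at all, so this sketch needs Excel 2011 or later (or Excel on Windows), or a translation into AppleScript.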

  • Spreadsheet application that can handle big data on OS X

    - by Peter
    I've been working with Excel for quite a while for some statistical analysis that I do regularly. The size of the data that I'm working with has gotten much larger as of late, however. The layout of the files in question is quite simple, usually just a few columns: a UNIX timestamp, an EST value, a proprietary numeric value, and finally an average of the rows whose timestamps are within +/- 1000 of that row's timestamp (a little AVERAGEIFS() formula). That formula and the EST conversion are the only formulas in the sheet. I'm beginning to work with files with 500,000+ rows. Running the average formula down the entire column takes forever. The end result is the production of print-worthy graphs. I'm looking for either a UNIX CL utility or a separate spreadsheet/database application that can handle this amount of data without melting my CPU or making me wait an hour. Is there anything out there? TL;DR: A simple Excel sheet with over half a million rows is getting too slow to work with. OS X alternatives?
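
    Since a UNIX command-line utility was mentioned: the windowed average is the only heavy part, and outside a spreadsheet it drops from quadratic to roughly O(n log n) with a sort plus prefix sums. A hedged Python sketch, assuming a CSV whose relevant columns are named ts and value (the names are placeholders):

        import numpy as np
        import pandas as pd

        df = pd.read_csv("data.csv").sort_values("ts").reset_index(drop=True)
        ts = df["ts"].to_numpy()
        val = df["value"].to_numpy()

        # prefix sums make each +/-1000-second window an O(1) average
        csum = np.concatenate(([0.0], np.cumsum(val)))
        lo = np.searchsorted(ts, ts - 1000, side="left")
        hi = np.searchsorted(ts, ts + 1000, side="right")
        df["window_avg"] = (csum[hi] - csum[lo]) / (hi - lo)
        df.to_csv("data_with_avg.csv", index=False)

    Half a million rows is seconds of work this way, and the output CSV can feed whatever produces the print-worthy graphs.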

  • Can't get php+sqlite working

    - by facha
    Hi everyone, I've been struggling all morning to make PHP work with an SQLite database. Here is the piece of PHP code that I try to execute: #less /var/www/html/test.php <?php $db=new PDO("sqlite:/var/www/test.sql"); $sql = "insert into test (login,pass) values ('login','pass');"; $db->exec($sql); ?> Here is how I set up the test: # sqlite3 /var/www/test.sql sqlite> create table test (login varchar,pass varchar); #chown apache:apache /var/www/test.sql #chmod 644 /var/www/test.sql Here is the stuff that drives me mad: when I execute it from the command line (#php test.php), everything goes well: the SQL is executed and I can see a new row appear in the database. When I execute the same script from a browser, the SQL is not executed - I don't get a new row in the database. There are no errors in the Apache log file. Please help.
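
    Two things worth checking, offered as assumptions rather than a diagnosis. First, SQLite needs write access to the directory as well as the file, because it creates a temporary journal file next to the database on every write; the CLI test ran as root, which can write to /var/www, while apache probably cannot. Second, PDO fails silently by default, which would explain the empty logs:

        <?php
        $db = new PDO("sqlite:/var/www/test.sql");
        // make PDO throw on failure instead of failing silently
        $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        $db->exec("insert into test (login,pass) values ('login','pass');");
        ?>

    Moving the database into its own apache-owned directory (e.g. mkdir /var/www/db; mv /var/www/test.sql /var/www/db/; chown -R apache:apache /var/www/db) addresses the journal problem.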

  • mysqldump --where with = operator doesn't get all rows - Help!

    - by JonathanLIVE
    I have a situation with a particular table that now thinks it contains 4 Petabytes of data. I know that sounds cool, but I assure you, it is only on a 60GB partition. This table has 9 fields in it. One of them is a domain_id field. It is the best field to identify the rows by, as there are only approximately 6300 of them. The only other field option to match has over 2 million records, and that's just more difficult. I cannot do a straight mysqldump because it will attempt to output all 4PB of data and fill the drive long before it gets close to that, so I need to surgically remove the good stuff, destroy the db, and recreate it. I believe if I can do a dump for each domain_id record, then I will get most of the usable data out of it. This is what I am trying to use: mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table --max_allowed_packet=1000000000 database table --where="domain_id=10" > domains10.sql Using this I expect every row with domain_id 10 to be exported. However, when I check the export, I am only getting 1 row, even though when I look at the db there are many, many rows. It is as though the operator just finds one, then gives up. I have tried various operators. Using < or > I am able to get more of the data, but the export stops short at certain rows where the data has been compromised. With over 6000 to go through, I can't easily narrow down which rows are being affected in the export. So, what I need is an operator that will basically do what I thought = would do: simply give me an export of all records that match the specific field. Also note, the only way I got this DB even accessible is through innodb_force_recovery = 3. So I need to get this right, because after this is done, I have to drop the db in order to make mysql functional again. Looking forward to any helpful answers.
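
    Not an explanation of the = behavior (with the table corrupt enough to claim 4PB, the scan behind the WHERE is presumably hitting bad pages), but a hedged workaround sketch: run one mysqldump per domain_id, so a bad range only loses one file instead of aborting the whole export. Database and table names are placeholders, and if the DISTINCT scan itself dies under innodb_force_recovery, feed the loop a known list of the ~6300 ids instead:

        #!/bin/sh
        for id in $(mysql -N -u root -e "SELECT DISTINCT domain_id FROM mydb.mytable"); do
            mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table \
                mydb mytable --where="domain_id=$id" > "domain_$id.sql" \
                || echo "domain_id $id failed" >> failed_domains.txt
        done

    failed_domains.txt then narrows the corruption down to specific domains without hand-checking all 6300.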

  • Touchpad scroll slow and jumpy

    - by IR
    I have a laptop with a Synaptics touchpad, running Win7 x64. When I use the scrolling region of the touchpad in some applications, for example Visual Studio 2008, Notepad and Windows Media Player 12, the scroll is very slow. If I pull the edge of the touchpad slowly, the program will scroll one row at a time (regardless of the number-of-lines setting in the mouse configuration). If I pull the edge quickly, though, the program will instantly jump about 20 rows, making it way too fast. In some applications, like Firefox, the scrolling works as expected. Changing the scroll-speed setting for the touchpad does not help: if you make it slower it doesn't do the 20-row jump, but then it's horribly slow, and if you try to make it faster it will do the jumps all the time. I have tried both Synaptics' generic drivers and the "special" drivers that HP provides, but they both have the same problem (except that the generic one can't adjust the scrolling speed, even though that didn't help anyway). With Windows' generic drivers the scrolling region doesn't even work. Other mice I've tried with a scrollwheel work as they should.

  • Hungry hungry BIOS: why do I have less than 4 GiB of memory?

    - by Rhymoid
    I thought I had 4 GiB of memory, but just to be sure, let's ask the BIOS about that: ?: sudo dmidecode --type 20 # dmidecode 2.12 SMBIOS 2.6 present. Handle 0x000B, DMI type 20, 19 bytes Memory Device Mapped Address Starting Address: 0x00000000000 Ending Address: 0x0007FFFFFFF Range Size: 2 GB Physical Device Handle: 0x000A Memory Array Mapped Address Handle: 0x000E Partition Row Position: Unknown Interleave Position: Unknown Interleaved Data Depth: Unknown Handle 0x000D, DMI type 20, 19 bytes Memory Device Mapped Address Starting Address: 0x00080000000 Ending Address: 0x000FFFFFFFF Range Size: 2 GB Physical Device Handle: 0x000C Memory Array Mapped Address Handle: 0x000E Partition Row Position: Unknown Interleave Position: Unknown Interleaved Data Depth: Unknown Alright, 4 GiB it is. But I can't use all of it: ?: cat /proc/meminfo | head -n 1 MemTotal: 3913452 kB Somehow, somewhere, I lost 274 MiB. Where did 6% of my memory go? Now I know the address ranges in DMI are incorrect, because the ACPI memory map reports usable ranges well beyond the ending address of the second memory module: ?: dmesg | grep -E "BIOS-e820: .* usable" [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009e7ff] usable [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000dee7bfff] usable [ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000117ffffff] usable I get pretty much the same info from /proc/iomem (except for the 4 kiB hole 0x000-0xFFF), which also shows that the kernel only accounts for less than 8 MiB. I guess 0x00000000-0x7FFFFFFF is indeed mapped to the first memory module, and 0x80000000-0xDFFFFFFF to part of the second memory module (a bunch of ACPI NVS things live between 0xDEE7C000 and 0xDEF30FFF, and the remaining 16-something MiB of that range are just 'reserved'). I guess the highest 0x18000000 bytes of the second memory module are mapped above the 4 GiB mark. But even then, there are two problems: 128 MiB (0x08000000 bytes, living somewhere between 0xE0000000 and 0xFFFFFFFF) are still completely unaccounted for. To note, my graphics card is on PCI-Express and (allegedly) has 1 GiB dedicated memory, so that shouldn't be the culprit. Did the BIOS screw up in moving the memory, leaving it partially shadowed by MMIO? Even with this mediocre explanation, I only 'found' 128 MiB. But /proc/meminfo is reporting a much larger deficit; where's the other 146 MiB? How does Linux count MemTotal?
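
    On the last question: MemTotal is what remains after the kernel carves its own reservations (kernel image and data, initial page tables, initramfs, and similar) out of the e820-usable ranges, so it always comes in somewhat under the raw usable total. A quick sanity check is to sum the usable ranges yourself; a sketch assuming GNU awk for strtonum:

        dmesg | grep -E 'BIOS-e820: .* usable' |
          grep -oE '0x[0-9a-f]+-0x[0-9a-f]+' |
          awk -F'[-x]' '{ bytes += strtonum("0x" $4) - strtonum("0x" $2) + 1 }
                        END { printf "e820 usable: %.1f MiB\n", bytes / 1048576 }'

    The gap between that figure and MemTotal is the kernel's own footprint (the boot log's "Memory: ...K/...K available" line breaks down part of it); the missing 128 MiB below the 4 GiB mark is a separate matter between the BIOS and MMIO.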

  • Linux Has Become Very Slow Dealing With Large Data

    - by Kohjah Breese
    Last year I bought a computer for around $1,800, so it is relatively high-end. When I first got it I was particularly pleased at how quickly it dealt with large MySQL queries, imports and exports. But somewhere along the way something has gone wrong, and I am not sure how to diagnose the problem. Any job that involves processing large amounts of data, e.g. gzipping a file of c. 1GB+ or running UPDATEs on large MySQL tables, has become very slow. I just performed an intensive ALTER statement on a 240,000,000-row table on a remote server, which is lower spec. That took about 10 minutes. However, performing the same query on a 167,000,000-row table on my computer went fine until it hit 860MB. Now it is only writing about 1MB every 15 seconds. Does anyone have any advice on debugging what the issue is? I am using Linux Mint (based on Ubuntu 12.04). The home partition is encrypted, which really slows down gzip. I have noticed the swap is barely used, but am not sure if that is because there is more than enough RAM. The filesystem is ext4. The MySQL server is on a separate hard drive, but it was fine when I first installed it. Other than the above issues, there are no other problems with it. I am going to install a fresh Ubuntu on the 4th hard drive to see if that is any different.
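
    A hedged starting point for the diagnosis: while one of the slow jobs runs, watch whether the machine is waiting on I/O, swapping, or the disk itself. A write rate that decays mid-job to ~1MB every 15 seconds is also a classic symptom of a failing drive, so SMART is worth a look:

        vmstat 5                     # watch the wa (I/O wait) and si/so (swap) columns
        iostat -xm 5                 # per-device utilization and await times (sysstat package)
        sudo smartctl -a /dev/sda    # reallocated/pending sector counts (smartmontools)

    If iostat shows the MySQL drive pegged at 100% utilization with long await times while throughput is tiny, suspect the drive or its cabling before the software stack.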

  • Compare cells in two different spreadsheets and extract data from one and place it in the other if a match is found

    - by Fergie
    I need to find a way to compare two spreadsheets and, if there is a match on specific cells, pull data from one sheet to another. Say the two spreadsheets contain a value that identifies a piece of equipment: spreadsheet 1 spreadsheet 2 Server Server Serial # 123abc 123abc 123-xx-456 There are of course many, many records/rows in each sheet. I need to look at the first cell in the server column of sheet 1 and then search a range of cells in the server column of sheet 2 for a match. If there is a match, I need to pull the serial # value from the cell in the matching row and put it into the serial # cell of the matching row in sheet 1 (all of the "serial #" cells in sheet 1 are presently empty). If that explanation is too convoluted, I can clarify by answering any questions you may have. My deadline for this task is noon tomorrow, 30 Aug 2012. Yes, I got the task today at noon... I am not an Excel user and just get thrust into it on occasion... Any help would be a huge assist.
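
    A hedged formula sketch: assuming the server names are in column A of both sheets, sheet 2's serial numbers are in its column B, and row 1 holds headings, put this in sheet 1's serial # column and fill it down (IFERROR needs Excel 2007+; on 2003, wrap the VLOOKUP in an ISNA test instead):

        =IFERROR(VLOOKUP(A2,Sheet2!$A:$B,2,FALSE),"")

    The FALSE forces exact matching, and the empty-string fallback leaves non-matching rows blank rather than showing #N/A.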

  • Error codes for C++

    - by billy
    #include <iostream> #include <iomanip> using namespace std; //Global constant variable declaration const int MaxRows = 8, MaxCols = 10, SEED = 10325; //Functions Declaration void PrintNameHeader(ostream& out); void Fill2DArray(double ary[][MaxCols]); void Print2DArray(const double ary[][MaxCols]); double GetTotal(const double ary[][MaxCols]); double GetAverage(const double ary[][MaxCols]); double GetRowTotal(const double ary[][MaxCols], int theRow); double GetColumnTotal(const double ary[][MaxCols], int theRow); double GetHighestInRow(const double ary[][MaxCols], int theRow); double GetLowestInRow(const double ary[][MaxCols], int theRow); double GetHighestInCol(const double ary[][MaxCols], int theCol); double GetLowestInCol(const double ary[][MaxCols], int theCol); double GetHighest(const double ary[][MaxCols], int& theRow, int& theCol); double GetLowest(const double ary[][MaxCols], int& theRow, int& theCol); int main() { int theRow; int theCol; PrintNameHeader(cout); cout << fixed << showpoint << setprecision(1); srand(static_cast<unsigned int>(SEED)); double ary[MaxRows][MaxCols]; cout << "The seed value for random number generator is: " << SEED << endl; cout << endl; Fill2DArray(ary); Print2DArray(ary); cout << " The Total for all the elements in this array is: " << setw(7) << GetTotal(ary) << endl; cout << "The Average of all the elements in this array is: " << setw(7) << GetAverage(ary) << endl; cout << endl; cout << "The sum of each row is:" << endl; for(int index = 0; index < MaxRows; index++) { cout << "Row " << (index + 1) << ": " << GetRowTotal(ary, theRow) << endl; } cout << "The highest and lowest of each row is: " << endl; for(int index = 0; index < MaxCols; index++) { cout << "Row " << (index + 1) << ": " << GetHighestInRow(ary, theRow) << " " << GetLowestInRow(ary, theRow) << endl; } cout << "The highest and lowest of each column is: " << endl; for(int index = 0; index < MaxCols; index++) { cout << "Col " << (index + 1) << ": " << GetHighestInCol(ary, theRow) << " " << GetLowestInCol(ary, theRow) << endl; } cout << "The highest value in all the elements in this array is: " << endl; cout << GetHighest(ary, theRow, theCol) << "[" << theRow << "]" << "[" << theCol << "]" << endl; cout << "The lowest value in all the elements in this array is: " << endl; cout << GetLowest(ary, theRow, theCol) << "[" << theRow << "]" << "[" << theCol << "]" << endl; return 0; } //Define Functions void PrintNameHeader(ostream& out) { out << "*******************************" << endl; out << "* *" << endl; out << "* C.S M10A Spring 2010 *" << endl; out << "* Programming Assignment 10 *" << endl; out << "* Due Date: Thurs. Mar. 
25 *" << endl; out << "*******************************" << endl; out << endl; } void Fill2DArray(double ary[][MaxCols]) { for(int index1 = 0; index1 < MaxRows; index1++) { for(int index2= 0; index2 < MaxCols; index2++) { ary[index1][index2] = (rand()%1000)/10; } } } void Print2DArray(const double ary[][MaxCols]) { cout << " Column "; for(int index = 0; index < MaxCols; index++) { int column = index + 1; cout << " " << column << " "; } cout << endl; cout << " "; for(int index = 0; index < MaxCols; index++) { int column = index +1; cout << "----- "; } cout << endl; for(int index1 = 0; index1 < MaxRows; index1++) { cout << "Row " << (index1 + 1) << ":"; for(int index2= 0; index2 < MaxCols; index2++) { cout << setw(6) << ary[index1][index2]; } } } double GetTotal(const double ary[][MaxCols]) { double total = 0; for(int theRow = 0; theRow < MaxRows; theRow++) { total = total + GetRowTotal(ary, theRow); } return total; } double GetAverage(const double ary[][MaxCols]) { double total = 0, average = 0; total = GetTotal(ary); average = total / (MaxRows * MaxCols); return average; } double GetRowTotal(const double ary[][MaxCols], int theRow) { double sum = 0; for(int index = 0; index < MaxCols; index++) { sum = sum + ary[theRow][index]; } return sum; } double GetColumTotal(const double ary[][MaxCols], int theCol) { double sum = 0; for(int index = 0; index < theCol; index++) { sum = sum + ary[index][theCol]; } return sum; } double GetHighestInRow(const double ary[][MaxCols], int theRow) { double highest = 0; for(int index = 0; index < MaxCols; index++) { if(ary[theRow][index] > highest) highest = ary[theRow][index]; } return highest; } double GetLowestInRow(const double ary[][MaxCols], int theRow) { double lowest = 0; for(int index = 0; index < MaxCols; index++) { if(ary[theRow][index] < lowest) lowest = ary[theRow][index]; } return lowest; } double GetHighestInCol(const double ary[][MaxCols], int theCol) { double highest = 0; for(int index = 0; index < MaxRows; index++) { if(ary[index][theCol] > highest) highest = ary[index][theCol]; } return highest; } double GetLowestInCol(const double ary[][MaxCols], int theCol) { double lowest = 0; for(int index = 0; index < MaxRows; index++) { if(ary[index][theCol] < lowest) lowest = ary[index][theCol]; } return lowest; } double GetHighest(const double ary[][MaxCols], int& theRow, int& theCol) { theRow = 0; theCol = 0; double highest = ary[theRow][theCol]; for(int index = 0; index < MaxRows; index++) { for(int index1 = 0; index1 < MaxCols; index1++) { double highest = 0; if(ary[index1][theCol] > highest) { highest = ary[index][index1]; theRow = index; theCol = index1; } } } return highest; } double Getlowest(const double ary[][MaxCols], int& theRow, int& theCol) { theRow = 0; theCol = 0; double lowest = ary[theRow][theCol]; for(int index = 0; index < MaxRows; index++) { for(int index1 = 0; index1 < MaxCols; index1++) { double lowest = 0; if(ary[index1][theCol] < lowest) { lowest = ary[index][index1]; theRow = index; theCol = index1; } } } return lowest; } . 1>------ Build started: Project: teddy lab 10, Configuration: Debug Win32 ------ 1>Compiling... 1>lab 10.cpp 1>c:\users\owner\documents\visual studio 2008\projects\teddy lab 10\teddy lab 10\ lab 10.cpp(46) : warning C4700: uninitialized local variable 'theRow' used 1>c:\users\owner\documents\visual studio 2008\projects\teddy lab 10\teddy lab 10\ lab 10.cpp(62) : warning C4700: uninitialized local variable 'theCol' used 1>Linking... 
1> lab 10.obj : error LNK2028: unresolved token (0A0002E0) "double __cdecl GetLowest(double const (* const)[10],int &,int &)" (?GetLowest@@$$FYANQAY09$$CBNAAH1@Z) referenced in function "int __cdecl main(void)" (?main@@$$HYAHXZ) 1> lab 10.obj : error LNK2019: unresolved external symbol "double __cdecl GetLowest(double const (* const)[10],int &,int &)" (?GetLowest@@$$FYANQAY09$$CBNAAH1@Z) referenced in function "int __cdecl main(void)" (?main@@$$HYAHXZ) 1>C:\Users\owner\Documents\Visual Studio 2008\Projects\ lab 10\Debug\ lab 10.exe : fatal error LNK1120: 2 unresolved externals 1>Build log was saved at "file://c:\Users\owner\Documents\Visual Studio 2008\Projects\ lab 10\teddy lab 10\Debug\BuildLog.htm" 1>teddy lab 10 - 3 error(s), 2 warning(s) ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
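
    For what it's worth, the build log explains itself: the unresolved external is a case mismatch. main calls GetLowest, but the definition is spelled Getlowest, and C++ identifiers are case-sensitive, so the linker never finds the body. (GetColumnTotal vs. GetColumTotal is the same latent mistake; it only escapes the linker because nothing calls it.) Renaming the definitions to match the declarations fixes the link:

        // definitions renamed to match the declarations at the top of the file
        double GetLowest(const double ary[][MaxCols], int& theRow, int& theCol) { /* body as before */ }
        double GetColumnTotal(const double ary[][MaxCols], int theCol) { /* body as before */ }

    The two C4700 warnings are separate: main passes theRow and theCol to the per-row helpers without ever assigning them. Using the loop index instead, e.g. GetRowTotal(ary, index) and GetHighestInRow(ary, index), silences the warnings and makes the loops print what they claim to.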

  • Simple Self-Join Query with Bad Performance

    - by user1514042
    Could anyone advice on how do I improve the performance of the following query. Note, the problem seems to be caused by where clause. Data (table contains a huge set of rows - 500K+, the set of parameters it's called with assums the return of 2-5K records per query, which takes 8-10 minutes currently): USE [SomeDb] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[Data]( [x] [money] NOT NULL, [y] [money] NOT NULL, CONSTRAINT [PK_Data] PRIMARY KEY CLUSTERED ( [x] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO The Query select top 10000 s.x as sx, e.x as ex, s.y as sy, e.y as ey, e.y - s.y as y_delta, e.x - s.x as x_delta from Data s inner join Data e on e.x > s.x and e.x - s.x between xFrom and xTo --where e.y - s.y > @yDelta -- when uncommented causes a huge delay Update 1 - Execution Plan <?xml version="1.0" encoding="utf-16"?> <ShowPlanXML xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" Version="1.2" Build="11.0.2100.60" xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan"> <BatchSequence> <Batch> <Statements> <StmtSimple StatementCompId="1" StatementEstRows="100" StatementId="1" StatementOptmLevel="FULL" StatementOptmEarlyAbortReason="GoodEnoughPlanFound" StatementSubTreeCost="0.0263655" StatementText="select top 100&#xD;&#xA;s.x as sx,&#xD;&#xA;e.x as ex,&#xD;&#xA;s.y as sy,&#xD;&#xA;e.y as ey,&#xD;&#xA;e.y - s.y as y_delta,&#xD;&#xA;e.x - s.x as x_delta&#xD;&#xA;from Data s &#xD;&#xA; inner join Data e&#xD;&#xA; on e.x &gt; s.x and e.x - s.x between 100 and 105&#xD;&#xA;where e.y - s.y &gt; 0.01&#xD;&#xA;" StatementType="SELECT" QueryHash="0xAAAC02AC2D78CB56" QueryPlanHash="0x747994153CB2D637" RetrievedFromCache="true"> <StatementSetOptions ANSI_NULLS="true" ANSI_PADDING="true" ANSI_WARNINGS="true" ARITHABORT="true" CONCAT_NULL_YIELDS_NULL="true" NUMERIC_ROUNDABORT="false" QUOTED_IDENTIFIER="true" /> <QueryPlan DegreeOfParallelism="0" NonParallelPlanReason="NoParallelPlansInDesktopOrExpressEdition" CachedPlanSize="24" CompileTime="13" CompileCPU="13" CompileMemory="424"> <MemoryGrantInfo SerialRequiredMemory="0" SerialDesiredMemory="0" /> <OptimizerHardwareDependentProperties EstimatedAvailableMemoryGrant="52199" EstimatedPagesCached="14561" EstimatedAvailableDegreeOfParallelism="4" /> <RelOp AvgRowSize="55" EstimateCPU="1E-05" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="100" LogicalOp="Compute Scalar" NodeId="0" Parallel="false" PhysicalOp="Compute Scalar" EstimatedTotalSubtreeCost="0.0263655"> <OutputList> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> <ColumnReference Column="Expr1004" /> <ColumnReference Column="Expr1005" /> </OutputList> <ComputeScalar> <DefinedValues> <DefinedValue> <ColumnReference Column="Expr1004" /> <ScalarOperator ScalarString="[SomeDb].[dbo].[Data].[y] as [e].[y]-[SomeDb].[dbo].[Data].[y] as [s].[y]"> <Arithmetic Operation="SUB"> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </Identifier> </ScalarOperator> 
<ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> </Identifier> </ScalarOperator> </Arithmetic> </ScalarOperator> </DefinedValue> <DefinedValue> <ColumnReference Column="Expr1005" /> <ScalarOperator ScalarString="[SomeDb].[dbo].[Data].[x] as [e].[x]-[SomeDb].[dbo].[Data].[x] as [s].[x]"> <Arithmetic Operation="SUB"> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> </Identifier> </ScalarOperator> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> </Identifier> </ScalarOperator> </Arithmetic> </ScalarOperator> </DefinedValue> </DefinedValues> <RelOp AvgRowSize="39" EstimateCPU="1E-05" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="100" LogicalOp="Top" NodeId="1" Parallel="false" PhysicalOp="Top" EstimatedTotalSubtreeCost="0.0263555"> <OutputList> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </OutputList> <RunTimeInformation> <RunTimeCountersPerThread Thread="0" ActualRows="100" ActualEndOfScans="1" ActualExecutions="1" /> </RunTimeInformation> <Top RowCount="false" IsPercent="false" WithTies="false"> <TopExpression> <ScalarOperator ScalarString="(100)"> <Const ConstValue="(100)" /> </ScalarOperator> </TopExpression> <RelOp AvgRowSize="39" EstimateCPU="151828" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="100" LogicalOp="Inner Join" NodeId="2" Parallel="false" PhysicalOp="Nested Loops" EstimatedTotalSubtreeCost="0.0263455"> <OutputList> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </OutputList> <RunTimeInformation> <RunTimeCountersPerThread Thread="0" ActualRows="100" ActualEndOfScans="0" ActualExecutions="1" /> </RunTimeInformation> <NestedLoops Optimized="false"> <OuterReferences> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </OuterReferences> <RelOp AvgRowSize="23" EstimateCPU="1.80448" EstimateIO="3.76461" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="1" LogicalOp="Clustered Index Scan" NodeId="3" Parallel="false" PhysicalOp="Clustered Index Scan" EstimatedTotalSubtreeCost="0.0032831" TableCardinality="1640290"> <OutputList> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </OutputList> <RunTimeInformation> <RunTimeCountersPerThread Thread="0" ActualRows="15225" ActualEndOfScans="0" ActualExecutions="1" /> </RunTimeInformation> <IndexScan Ordered="false" ForcedIndex="false" ForceScan="false" NoExpandHint="false"> 
<DefinedValues> <DefinedValue> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> </DefinedValue> <DefinedValue> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </DefinedValue> </DefinedValues> <Object Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Index="[PK_Data]" Alias="[e]" IndexKind="Clustered" /> </IndexScan> </RelOp> <RelOp AvgRowSize="23" EstimateCPU="0.902317" EstimateIO="1.88387" EstimateRebinds="1" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="100" LogicalOp="Clustered Index Seek" NodeId="4" Parallel="false" PhysicalOp="Clustered Index Seek" EstimatedTotalSubtreeCost="0.0263655" TableCardinality="1640290"> <OutputList> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> </OutputList> <RunTimeInformation> <RunTimeCountersPerThread Thread="0" ActualRows="100" ActualEndOfScans="15224" ActualExecutions="15225" /> </RunTimeInformation> <IndexScan Ordered="true" ScanDirection="FORWARD" ForcedIndex="false" ForceSeek="false" ForceScan="false" NoExpandHint="false" Storage="RowStore"> <DefinedValues> <DefinedValue> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> </DefinedValue> <DefinedValue> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> </DefinedValue> </DefinedValues> <Object Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Index="[PK_Data]" Alias="[s]" IndexKind="Clustered" /> <SeekPredicates> <SeekPredicateNew> <SeekKeys> <EndRange ScanType="LT"> <RangeColumns> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> </RangeColumns> <RangeExpressions> <ScalarOperator ScalarString="[SomeDb].[dbo].[Data].[x] as [e].[x]"> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> </Identifier> </ScalarOperator> </RangeExpressions> </EndRange> </SeekKeys> </SeekPredicateNew> </SeekPredicates> <Predicate> <ScalarOperator ScalarString="([SomeDb].[dbo].[Data].[x] as [e].[x]-[SomeDb].[dbo].[Data].[x] as [s].[x])&gt;=($100.0000) AND ([SomeDb].[dbo].[Data].[x] as [e].[x]-[SomeDb].[dbo].[Data].[x] as [s].[x])&lt;=($105.0000) AND ([SomeDb].[dbo].[Data].[y] as [e].[y]-[SomeDb].[dbo].[Data].[y] as [s].[y])&gt;(0.01)"> <Logical Operation="AND"> <ScalarOperator> <Compare CompareOp="GE"> <ScalarOperator> <Arithmetic Operation="SUB"> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> </Identifier> </ScalarOperator> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> </Identifier> </ScalarOperator> </Arithmetic> </ScalarOperator> <ScalarOperator> <Const ConstValue="($100.0000)" /> </ScalarOperator> </Compare> </ScalarOperator> <ScalarOperator> <Compare CompareOp="LE"> <ScalarOperator> <Arithmetic Operation="SUB"> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> </Identifier> </ScalarOperator> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> </Identifier> </ScalarOperator> </Arithmetic> </ScalarOperator> <ScalarOperator> <Const ConstValue="($105.0000)" /> </ScalarOperator> </Compare> </ScalarOperator> 
<ScalarOperator> <Compare CompareOp="GT"> <ScalarOperator> <Arithmetic Operation="SUB"> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </Identifier> </ScalarOperator> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> </Identifier> </ScalarOperator> </Arithmetic> </ScalarOperator> <ScalarOperator> <Const ConstValue="(0.01)" /> </ScalarOperator> </Compare> </ScalarOperator> </Logical> </ScalarOperator> </Predicate> </IndexScan> </RelOp> </NestedLoops> </RelOp> </Top> </RelOp> </ComputeScalar> </RelOp> </QueryPlan> </StmtSimple> </Statements> </Batch> </BatchSequence> </ShowPlanXML>
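
    A hedged observation from the query shape and the plan above: both filters are expressions over columns from two rows (e.x - s.x and e.y - s.y), which the optimizer cannot turn into an index seek range by itself; the plan only gets its narrow seek from the derived e.x comparison, then evaluates everything else as a residual predicate per pair. Rewriting the window so the indexed column stands alone usually helps, e.g.:

        SELECT TOP 10000
               s.x AS sx, e.x AS ex, s.y AS sy, e.y AS ey,
               e.y - s.y AS y_delta, e.x - s.x AS x_delta
        FROM Data s
        JOIN Data e
          ON e.x BETWEEN s.x + @xFrom AND s.x + @xTo  -- sargable: a range seek on PK_Data(x)
        WHERE e.y - s.y > @yDelta;                    -- residual filter, applied after the seek

    The y filter can never use an index here (y is not indexed, and the delta depends on both rows); what it changes is how many (s, e) pairs must be generated before TOP finds enough survivors, which is why uncommenting it slows the query so dramatically.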

  • Converting a WPFToolkit DataGrid from 1D list to 2D matrix

    - by user61073
    Hello - I am wondering if anyone has attempted the following or has an idea as to how to do it. I have a WPFToolkit DataGrid which is bound to an ObservableCollection of items. As such, the DataGrid is shown with as many rows in the ObservableCollection, and as many columns as I have defined in for the DataGrid. That all is good. What I now need is to provide another view of the same data, only, instead, the DataGrid is shown with as many cells in the ObservableCollection. So let's say, my ObservableCollection has 100 items in it. The original scenario showed the DataGrid with 100 rows and 1 column. In the modified scenario, I need to show it with 10 rows and 10 columns, where each cell shows the value that was in the original representation. In other words, I need to transform my 1D ObservableCollection to a 2D ObservableCollection and display it in the DataGrid. I know how to do that programmatically in the code behind, but can it be done in XAML? Let me simplify the problem a little, in case anybody can have a crack at this. The XAML below does the following: * Defines an XmlDataProvider just for dummy data * Creates a DataGrid with 10 columns o each column is a DataGridTemplateColumn using the same CellTemplate * The CellTemplate is a simple TextBlock bound to an XML element If you run the XAML below, you will find that the DataGrid ends up with 5 rows, one for each book, and 10 columns that have identical content (all showing the book titles). However, what I am trying to accomplish, albeit with a different data set, is that in this case, I would end up with one row, with each book title appearing in a single cell in row 1, occupying cells 0-4, and nothing in cells 5-9. Then, if I added more data and had 12 books in my XML data source, I would get row 1 completely filled (cells covering the first 10 titles) and row 2 would get the first 2 cells filled. Can my scenario be accomplished primarily in XAML, or should I resign myself to working in the code behind? Any guidance would greatly be appreciated. Thanks so much! 
<UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:custom="http://schemas.microsoft.com/wpf/2008/toolkit" mc:Ignorable="d" x:Name="UserControl" d:DesignWidth="600" d:DesignHeight="400" > <UserControl.Resources> <XmlDataProvider x:Key="InventoryData" XPath="Inventory/Books"> <x:XData> <Inventory xmlns=""> <Books> <Book ISBN="0-7356-0562-9" Stock="in" Number="9"> <Title>XML in Action</Title> <Summary>XML Web Technology</Summary> </Book> <Book ISBN="0-7356-1370-2" Stock="in" Number="8"> <Title>Programming Microsoft Windows With C#</Title> <Summary>C# Programming using the .NET Framework</Summary> </Book> <Book ISBN="0-7356-1288-9" Stock="out" Number="7"> <Title>Inside C#</Title> <Summary>C# Language Programming</Summary> </Book> <Book ISBN="0-7356-1377-X" Stock="in" Number="5"> <Title>Introducing Microsoft .NET</Title> <Summary>Overview of .NET Technology</Summary> </Book> <Book ISBN="0-7356-1448-2" Stock="out" Number="4"> <Title>Microsoft C# Language Specifications</Title> <Summary>The C# language definition</Summary> </Book> </Books> <CDs> <CD Stock="in" Number="3"> <Title>Classical Collection</Title> <Summary>Classical Music</Summary> </CD> <CD Stock="out" Number="9"> <Title>Jazz Collection</Title> <Summary>Jazz Music</Summary> </CD> </CDs> </Inventory> </x:XData> </XmlDataProvider> <DataTemplate x:Key="GridCellTemplate"> <TextBlock> <TextBlock.Text> <Binding XPath="Title"/> </TextBlock.Text> </TextBlock> </DataTemplate> </UserControl.Resources> <Grid x:Name="LayoutRoot"> <custom:DataGrid HorizontalAlignment="Stretch" VerticalAlignment="Stretch" IsSynchronizedWithCurrentItem="True" Background="{DynamicResource WindowBackgroundBrush}" HeadersVisibility="All" RowDetailsVisibilityMode="Collapsed" SelectionUnit="CellOrRowHeader" CanUserResizeRows="False" GridLinesVisibility="None" RowHeaderWidth="35" AutoGenerateColumns="False" CanUserReorderColumns="False" CanUserSortColumns="False"> <custom:DataGrid.Columns> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="01" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="02" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="03" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="04" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="05" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="06" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="07" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="08" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="09" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="10" /> </custom:DataGrid.Columns> <custom:DataGrid.ItemsSource> <Binding Source="{StaticResource InventoryData}" XPath="Book"/> </custom:DataGrid.ItemsSource> </custom:DataGrid> </Grid>
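
    On the XAML-versus-code-behind question: XAML alone has no construct for re-shaping a 1D collection into rows of ten, so some imperative chunking (in a value converter or a wrapper collection) is hard to avoid. A hedged C# sketch of the core transform, with hypothetical names, that an IValueConverter could expose; the last row is padded so the 5-book sample yields one row with five empty trailing cells:

        using System.Collections.Generic;
        using System.Linq;

        static class Chunker
        {
            // Turn a flat list into rows of `width` items; short rows are padded with default values.
            public static List<List<T>> ToRows<T>(IList<T> source, int width)
            {
                return Enumerable.Range(0, (source.Count + width - 1) / width)
                    .Select(r => Enumerable.Range(0, width)
                        .Select(c => r * width + c < source.Count ? source[r * width + c] : default(T))
                        .ToList())
                    .ToList();
            }
        }

    Bind the DataGrid's ItemsSource to the chunked result and give each of the ten template columns an indexer binding into the row list ([0], [1], ... [9]) instead of the shared XPath binding.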
