Search Results

Search found 22078 results on 884 pages for 'composite primary key'.


  • reading keyboard input without "consuming" it in x86 assembly

    - by Bob
    OK, so here's the deal: I have an assignment regarding DOS and interrupts. I need to write a sort of key-logger function that can be called from C code. What that means is a C program would call an assembly function called startlog to start logging the keys pressed, until a function called endlog is called. The logging should work like this: write the ASCII value of any key pressed without disturbing the C code between startlog and endlog, meaning that if the C code also needs to read the input (let's say by scanf), it would still work. I managed to write the logger by changing the 9th entry of the interrupt vector table (the keyboard-press interrupt) to a function I wrote that writes the values to a file, and it works fine. However, the C code no longer gets the input. Basically, what I did is read the key pressed using int 21h, but after reading the ASCII value the keystroke is "consumed", so I need a way to either simulate the key press again or read the value without "consuming" it, so that the next time a key is read it reads the same key. (I described the code in English because it is long and clumsy assembly code.)
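
    One standard way out (a sketch I'm adding, not Bob's code; labels and storage are hypothetical, real-mode DOS assumed) is to stop consuming the key with int 21h. Instead, inside the int 9 handler, read the scan code straight from the keyboard port and then chain to the saved BIOS handler, so the keystroke still lands in the keyboard buffer for scanf to pick up:

        newint9:
            push ax
            in   al, 60h                ; peek the scan code from the keyboard port;
                                        ; this removes nothing from the BIOS buffer
            ; ... translate AL and append it to the log file here ...
            pop  ax
            jmp  dword ptr cs:[oldint9] ; chain to the original handler, which puts
                                        ; the key in the buffer as usual
        oldint9 dd ?                    ; saved vector, filled in by startlog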

    Read the article

  • How to make smooth movements in OpenGL?

    - by thyrgle
    So this is kind of on topic with my other OpenGL question (not my OpenGL ES question, but OpenGL the desktop version). If you have someone press a key to move a square, how do you make the square move naturally and less jumpily, but at the same speed I have now? This is my code for the glutKeyboardFunc() callback:

        void handleKeypress(unsigned char key, int x, int y) {
            if (key == 'w') {
                for (int i = 0; i < 12; i++) {
                    if (i == 1 || i == 7 || i == 10 || i == 4) {
                        square[i] = square[i] + 1;
                    }
                }
                glutPostRedisplay();
            }
            if (key == 'd') {
                for (int i = 0; i < 12; i++) {
                    if (i == 0 || i % 3 == 0) {
                        square[i] = square[i] + 1;
                    }
                }
                glutPostRedisplay();
            }
            if (key == 's') {
                for (int i = 0; i < 12; i++) {
                    if (i == 1 || i == 7 || i == 10 || i == 4) {
                        square[i] = square[i] - 1;
                    }
                }
                glutPostRedisplay();
            }
            if (key == 'a') {
                for (int i = 0; i < 12; i++) {
                    if (i == 0 || i % 3 == 0) {
                        square[i] = square[i] - 1;
                    }
                }
                glutPostRedisplay();
            }
        }

    I'm sorry if this doesn't quite make sense; I'll try to rephrase it if it doesn't.
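
    The usual fix (a sketch under assumed names; moveSquare() stands in for the index loops above) is to decouple movement from the key-repeat events: record key state in the down/up callbacks and apply small steps from a timer, so motion is continuous and frame-rate controlled:

        bool keyDown[256] = { false };

        void handleKeyDown(unsigned char key, int x, int y) { keyDown[key] = true;  }
        void handleKeyUp(unsigned char key, int x, int y)   { keyDown[key] = false; }

        void update(int value) {
            float step = 0.25f;                        /* smaller step, applied more often */
            if (keyDown['w']) moveSquare(0.0f, +step); /* moveSquare is hypothetical */
            if (keyDown['s']) moveSquare(0.0f, -step);
            if (keyDown['d']) moveSquare(+step, 0.0f);
            if (keyDown['a']) moveSquare(-step, 0.0f);
            glutPostRedisplay();
            glutTimerFunc(16, update, 0);              /* re-arm: roughly 60 updates/second */
        }

        /* registration, e.g. in main():
           glutKeyboardFunc(handleKeyDown);
           glutKeyboardUpFunc(handleKeyUp);
           glutTimerFunc(16, update, 0);  */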

    Read the article

  • Gnome 3 freezes on logon on samsung RV 509

    - by Noufal
    I have a Samsung NP-RV509 A0FIN and I tried to install GNU/Linux with GNOME 3.2 on it. I tried Fedora 16, Ubuntu 11.10 and Linux Mint 12 RC, but with no success. All of these freeze upon login into GNOME Shell. I think it is a problem with the graphics driver, so I tried the xorg-edgers PPA on my last installation, i.e. Linux Mint (as sketched below). I also tried various Intel graphics packages listed in the Synaptic package manager, but again no success. My device configuration is as follows (obtained from Windows 7):

        Processor: Intel(R) Pentium(R) CPU P6200 @ 2.13GHz (subscore 5.6, base score 4.6)
        Memory (RAM): 4.00 GB (subscore 7.2)
        Graphics: Intel(R) HD Graphics (subscore 4.6)
        Gaming graphics: 1562 MB total available graphics memory (subscore 5.2)
        Primary hard disk: 12GB free (50GB total) (subscore 5.9)
        Operating system: Windows 7 Ultimate

        System: SAMSUNG ELECTRONICS CO., LTD., model RV409/RV509/RV709; 4.00 GB RAM;
            32-bit operating system; 2 processor cores; 64-bit capable: yes
        Storage: 418 GB total; disk partition (C:) 12 GB free (50 GB total);
            media drive (D:) CD/DVD; disk partition (E:) 526 MB free (191 GB total);
            disk partition (F:) 101 GB free (177 GB total)
        Graphics: Intel(R) HD Graphics; 1562 MB total available graphics memory
            (64 MB dedicated graphics memory, 0 MB dedicated system memory,
            1498 MB shared system memory); display adapter driver version 8.15.10.2202;
            primary monitor resolution 1366x768; DirectX 10
        Network: Realtek PCIe GBE Family Controller; Broadcom 802.11n Network Adapter;
            Microsoft Virtual WiFi Miniport Adapter

    (Note from the report: the gaming graphics score is based on the primary graphics adapter. If this system has linked or multiple graphics adapters, some software applications may see additional performance benefits.) Any help is appreciated, and thanks in advance.
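
    For reference, the xorg-edgers attempt mentioned above normally amounts to something like this (a sketch; these are bleeding-edge drivers, so run at your own risk):

        sudo add-apt-repository ppa:xorg-edgers/ppa
        sudo apt-get update
        sudo apt-get upgrade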

    Read the article

  • Log shipping and shrinking transaction logs

    - by DavidWimbush
    I just solved a problem that had me worried for a bit. I'm log shipping from three primary servers to a single secondary server, and the transaction log disk on the secondary server was getting very full. I established that several primary databases had unused space that resulted from big, one-off updates, so I could shrink their logs. But would this action be log shipped and applied to the secondary database too? I thought probably not. And, more importantly, would it break log shipping? My secondary databases are in a Standby / Read Only state, so I didn't think I could shrink their logs. I RTFMd, Googled, and asked on a Q&A site (not the evil one) but was none the wiser. So I was facing a monumental round of shrink, full backup, full secondary restore and re-start of log shipping (which would leave us without a disaster recovery facility for the duration). Then I thought it might be worthwhile to take a non-essential database and just see what a log shrink on the primary would actually do on the secondary. So I did a DBCC SHRINKFILE and kept an eye on the secondary. Bingo! Log shipping didn't blink and the log on the secondary shrank too. I just love it when something turns out even better than I dared to hope. (And I guess this highlights something I need to learn about what activities are logged.)
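
    For anyone wanting to repeat the experiment, the test itself is one command on the primary (a sketch with hypothetical names; the post doesn't show the exact statement used):

        USE NonEssentialDb;
        GO
        -- shrink the log file to a 1 GB target, then watch the secondary:
        -- the shrink replayed there on the next backup/copy/restore cycle
        DBCC SHRINKFILE (NonEssentialDb_log, 1024);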

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak).

    1. My Switch Is Bored

    Allow me to present the rare and interesting execution plan operator, Switch. Following the short description Books Online gives for Switch, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view:

        CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24));
        CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49));
        CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74));
        CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99));
        GO
        CREATE VIEW V1
        AS
        SELECT c1 FROM dbo.T1 UNION ALL
        SELECT c1 FROM dbo.T2 UNION ALL
        SELECT c1 FROM dbo.T3 UNION ALL
        SELECT c1 FROM dbo.T4;

    Not only that, but it needs to be an updatable local partitioned view. We’ll need some primary keys to meet that requirement:

        ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);
        ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);
        ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);
        ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1);

    We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape):

        INSERT dbo.V1 (c1) VALUES (1);

    And now…the execution plan: the Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on.

    In case you were wondering, here’s a query and execution plan for a multi-row insert to the view:

        INSERT dbo.V1 (c1) VALUES (1), (2);

    Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other features that are newer than the Switch operator, like table partitioning, have specific execution plan support that doesn’t need Switch either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part; it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage!
    2. Invisible Plan Operators

    The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it.

    The problem

    The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it:

        USE tempdb;
        GO
        CREATE TABLE dbo.Calendar
        (
            dt date NOT NULL,
            isWeekday bit NOT NULL,
            theYear smallint NOT NULL,

            CONSTRAINT PK__dbo_Calendar_dt
                PRIMARY KEY CLUSTERED (dt)
        );
        GO
        -- Monday is the first day of the week for me
        SET DATEFIRST 1;

        -- Add 100 years of data
        INSERT dbo.Calendar WITH (TABLOCKX)
            (dt, isWeekday, theYear)
        SELECT
            CA.dt,
            isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END,
            theYear = YEAR(CA.dt)
        FROM Sandpit.dbo.Numbers AS N
        CROSS APPLY
        (
            VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113)))
        ) AS CA (dt)
        WHERE N.n BETWEEN 1 AND 36525;

    The following query counts the number of weekend days in 2013:

        SELECT Days = COUNT_BIG(*)
        FROM dbo.Calendar AS C
        WHERE theYear = 2013
        AND isWeekday = 0;

    It returns the correct result (104). The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly; the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled. Now we can see the full picture: the whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on, though.

    Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table:

        CREATE NONCLUSTERED INDEX Weekends
        ON dbo.Calendar (theYear)
        WHERE isWeekday = 0;

    The original query now produces a much more efficient plan. Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104). What’s going on? The estimate was spot on before we added the index!

    Explanation

    You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: the trace output shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and the COUNT_BIG(*) group by aggregate (1 row).
    All of these are correct, but that was before cost-based optimization got involved :)

    Cost-based optimization

    When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities, as trace flag 8608 output (initial memo contents) shows. During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select). The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘theYear = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that.

    New Cardinality Estimates

    The new memo groups require two new cardinality estimates to be derived. First, LogOp_GetIdx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics:

        DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER;

    The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!):

        DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM;

    This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows:

        DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM;

    Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 21, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 22. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation. Skipping over the rest of cost-based optimization (in a belated attempt at brevity), we can see the optimizer’s final output using trace flag 8607. This output shows the (incorrect, but understandable) 365-row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so.

    To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate.
    This isn’t intended to be a ‘fix’ of any sort; I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it:

        SELECT Days = COUNT_BIG(*) OVER ()
        FROM dbo.Calendar AS C
        WHERE theYear = 2013
        AND isWeekday = 0;

    In the estimated execution plan, note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan), bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection. I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key:

        CREATE NONCLUSTERED INDEX Weekends
        ON dbo.Calendar (theYear, isWeekday)
        WHERE isWeekday = 0
        WITH (DROP_EXISTING = ON);

    There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;) With the updated index in place, the original query produces an execution plan with the correct cardinality estimate showing at the Index Seek.

    That’s all for today. Remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators:

        Segment and Sequence Project
        Common Subexpression Spools
        Why Plan Operators Run Backwards
        Row Goals and the Top Operator
        Hash Match Flow Distinct
        Top N Sort
        Index Spools and Page Splits
        Singleton and Range Seeks
        Bitmaps
        Hash Join Performance
        Compute Scalar

    © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Why is my Internet connection randomly dropping?

    - by Jeanno
    Ever since I installed 12.04 (a clean install, not an upgrade), I have been having drops in my Internet connection. A drop can last anything from 15 seconds to about 3 minutes, and then the connection comes back. This happens while I am actively browsing the Internet, and also if I wake up the computer and open Firefox (sometimes I have a connection and sometimes I don't). Please note that when the connection is up, it is not slow (as speedtest.net results show). In the beginning I thought it was a problem with the r8169 driver for my RTL8111/8168B Ethernet card, so I downloaded r8168 from the Realtek website and followed the detailed instructions (blacklisted r8169, changed the file to '.bsh', ...), but the same problem persisted (a sketch of that driver swap is given below). So I switched to a wireless connection, and I got the same problem with the Internet connection dropping randomly. Any ideas? Thanks in advance.

    Output from 'lspci -v':

        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) Subsystem: Dell Device 04a7 Flags: bus master, fast devsel, latency 0 Capabilities: [e0] Vendor Specific Information: Len=0c <?>
        00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=01, subordinate=01, sec-latency=0 I/O behind bridge: 0000e000-0000efff Memory behind bridge: f8000000-fa0fffff Prefetchable memory behind bridge: 00000000d0000000-00000000dbffffff Capabilities: [88] Subsystem: Dell Device 04a7 Capabilities: [80] Power Management version 3 Capabilities: [90] MSI: Enable+ Count=1/1 Maskable- 64bit- Capabilities: [a0] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Capabilities: [140] Root Complex Link Kernel driver in use: pcieport Kernel modules: shpchp
        00:01.1 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=02, subordinate=02, sec-latency=0 I/O behind bridge: 0000d000-0000dfff Memory behind bridge: f4000000-f60fffff Prefetchable memory behind bridge: 00000000c0000000-00000000cbffffff Capabilities: [88] Subsystem: Dell Device 04a7 Capabilities: [80] Power Management version 3 Capabilities: [90] MSI: Enable+ Count=1/1 Maskable- 64bit- Capabilities: [a0] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Capabilities: [140] Root Complex Link Kernel driver in use: pcieport Kernel modules: shpchp
        00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04) Subsystem: Dell Device 04a7 Flags: bus master, fast devsel, latency 0, IRQ 52 Memory at f6108000 (64-bit, non-prefetchable) [size=16] Capabilities: [50] Power Management version 3 Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable- 64bit+ Kernel driver in use: mei Kernel modules: mei
        00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05) (prog-if 20 [EHCI]) Subsystem: Dell Device 04a7 Flags: bus master, medium devsel, latency 0, IRQ 16 Memory at f6107000 (32-bit, non-prefetchable) [size=1K] Capabilities: [50] Power Management version 2 Capabilities: [58] Debug port: BAR=1 offset=00a0 Capabilities: [98] PCI Advanced Features Kernel driver in use: ehci_hcd
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05) Subsystem: Dell Device 04a7 Flags: bus master, fast devsel, latency 0, IRQ 53 Memory at f6100000 (64-bit, non-prefetchable) [size=16K] Capabilities: [50] Power Management version 2 Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00 Capabilities: [100] Virtual Channel Capabilities: [130] Root Complex Link Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel
        00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=03, subordinate=03, sec-latency=0 Memory behind bridge: fa400000-fa4fffff Capabilities: [40] Express Root Port (Slot+), MSI 00 Capabilities: [80] MSI: Enable- Count=1/1 Maskable- 64bit- Capabilities: [90] Subsystem: Dell Device 04a7 Capabilities: [a0] Power Management version 2 Kernel driver in use: pcieport Kernel modules: shpchp
        00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b5) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=04, subordinate=04, sec-latency=0 I/O behind bridge: 0000c000-0000cfff Prefetchable memory behind bridge: 00000000dc100000-00000000dc1fffff Capabilities: [40] Express Root Port (Slot+), MSI 00 Capabilities: [80] MSI: Enable- Count=1/1 Maskable- 64bit- Capabilities: [90] Subsystem: Dell Device 04a7 Capabilities: [a0] Power Management version 2 Kernel driver in use: pcieport Kernel modules: shpchp
        00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b5) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=05, subordinate=05, sec-latency=0 I/O behind bridge: 0000b000-0000bfff Memory behind bridge: fa300000-fa3fffff Capabilities: [40] Express Root Port (Slot+), MSI 00 Capabilities: [80] MSI: Enable- Count=1/1 Maskable- 64bit- Capabilities: [90] Subsystem: Dell Device 04a7 Capabilities: [a0] Power Management version 2 Kernel driver in use: pcieport Kernel modules: shpchp
        00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b5) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=06, subordinate=06, sec-latency=0 I/O behind bridge: 0000a000-0000afff Memory behind bridge: fa200000-fa2fffff Capabilities: [40] Express Root Port (Slot+), MSI 00 Capabilities: [80] MSI: Enable- Count=1/1 Maskable- 64bit- Capabilities: [90] Subsystem: Dell Device 04a7 Capabilities: [a0] Power Management version 2 Kernel driver in use: pcieport Kernel modules: shpchp
        00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05) (prog-if 20 [EHCI]) Subsystem: Dell Device 04a7 Flags: bus master, medium devsel, latency 0, IRQ 23 Memory at f6106000 (32-bit, non-prefetchable) [size=1K] Capabilities: [50] Power Management version 2 Capabilities: [58] Debug port: BAR=1 offset=00a0 Capabilities: [98] PCI Advanced Features Kernel driver in use: ehci_hcd
        00:1f.0 ISA bridge: Intel Corporation P67 Express Chipset Family LPC Controller (rev 05) Subsystem: Dell Device 04a7 Flags: bus master, medium devsel, latency 0 Capabilities: [e0] Vendor Specific Information: Len=0c <?> Kernel modules: iTCO_wdt
        00:1f.2 RAID bus controller: Intel Corporation 82801 SATA Controller [RAID mode] (rev 05) Subsystem: Dell Device 04a7 Flags: bus master, 66MHz, medium devsel, latency 0, IRQ 42 I/O ports at f070 [size=8] I/O ports at f060 [size=4] I/O ports at f050 [size=8] I/O ports at f040 [size=4] I/O ports at f020 [size=32] Memory at f6105000 (32-bit, non-prefetchable) [size=2K] Capabilities: [80] MSI: Enable+ Count=1/1 Maskable- 64bit- Capabilities: [70] Power Management version 3 Capabilities: [a8] SATA HBA v1.0 Capabilities: [b0] PCI Advanced Features Kernel driver in use: ahci
        00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05) Subsystem: Dell Device 04a7 Flags: medium devsel, IRQ 5 Memory at f6104000 (64-bit, non-prefetchable) [size=256] I/O ports at f000 [size=32] Kernel modules: i2c-i801
        01:00.0 VGA compatible controller: NVIDIA Corporation Device 0dc5 (rev a1) (prog-if 00 [VGA controller]) Subsystem: NVIDIA Corporation Device 085b Flags: bus master, fast devsel, latency 0, IRQ 16 Memory at f8000000 (32-bit, non-prefetchable) [size=16M] Memory at d0000000 (64-bit, prefetchable) [size=128M] Memory at d8000000 (64-bit, prefetchable) [size=32M] I/O ports at e000 [size=128] Expansion ROM at fa000000 [disabled] [size=512K] Capabilities: [60] Power Management version 3 Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+ Capabilities: [78] Express Endpoint, MSI 00 Capabilities: [b4] Vendor Specific Information: Len=14 <?> Capabilities: [100] Virtual Channel Capabilities: [128] Power Budgeting <?> Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?> Kernel driver in use: nouveau Kernel modules: nouveau, nvidiafb
        01:00.1 Audio device: NVIDIA Corporation GF106 High Definition Audio Controller (rev a1) Subsystem: NVIDIA Corporation Device 085b Flags: bus master, fast devsel, latency 0, IRQ 17 Memory at fa080000 (32-bit, non-prefetchable) [size=16K] Capabilities: [60] Power Management version 3 Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+ Capabilities: [78] Express Endpoint, MSI 00 Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel
        02:00.0 VGA compatible controller: NVIDIA Corporation Device 0dc5 (rev a1) (prog-if 00 [VGA controller]) Subsystem: NVIDIA Corporation Device 085b Flags: bus master, fast devsel, latency 0, IRQ 17 Memory at f4000000 (32-bit, non-prefetchable) [size=32M] Memory at c0000000 (64-bit, prefetchable) [size=128M] Memory at c8000000 (64-bit, prefetchable) [size=64M] I/O ports at d000 [size=128] Expansion ROM at f6000000 [disabled] [size=512K] Capabilities: [60] Power Management version 3 Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+ Capabilities: [78] Express Endpoint, MSI 00 Capabilities: [b4] Vendor Specific Information: Len=14 <?> Capabilities: [100] Virtual Channel Capabilities: [128] Power Budgeting <?> Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?> Kernel driver in use: nouveau Kernel modules: nouveau, nvidiafb
        02:00.1 Audio device: NVIDIA Corporation GF106 High Definition Audio Controller (rev a1) Subsystem: NVIDIA Corporation Device 085b Flags: bus master, fast devsel, latency 0, IRQ 18 Memory at f6080000 (32-bit, non-prefetchable) [size=16K] Capabilities: [60] Power Management version 3 Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+ Capabilities: [78] Express Endpoint, MSI 00 Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel
        03:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03) (prog-if 30 [XHCI]) Subsystem: Dell Device 04a7 Flags: bus master, fast devsel, latency 0, IRQ 16 Memory at fa400000 (64-bit, non-prefetchable) [size=8K] Capabilities: [50] Power Management version 3 Capabilities: [70] MSI: Enable- Count=1/8 Maskable- 64bit+ Capabilities: [90] MSI-X: Enable+ Count=8 Masked- Capabilities: [a0] Express Endpoint, MSI 00 Capabilities: [100] Advanced Error Reporting Capabilities: [140] Device Serial Number ff-ff-ff-ff-ff-ff-ff-ff Capabilities: [150] Latency Tolerance Reporting Kernel driver in use: xhci_hcd
        04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06) Subsystem: Dell Device 04a7 Flags: bus master, fast devsel, latency 0, IRQ 51 I/O ports at c000 [size=256] Memory at dc104000 (64-bit, prefetchable) [size=4K] Memory at dc100000 (64-bit, prefetchable) [size=16K] Capabilities: [40] Power Management version 3 Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [70] Express Endpoint, MSI 01 Capabilities: [b0] MSI-X: Enable- Count=4 Masked- Capabilities: [d0] Vital Product Data Capabilities: [100] Advanced Error Reporting Capabilities: [140] Virtual Channel Capabilities: [160] Device Serial Number 03-00-00-00-68-4c-e0-00 Kernel driver in use: r8168 Kernel modules: r8168
        05:00.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6315 Series Firewire Controller (rev 01) (prog-if 10 [OHCI]) Subsystem: Dell Device 04a7 Flags: bus master, fast devsel, latency 0, IRQ 18 Memory at fa300000 (64-bit, non-prefetchable) [size=2K] I/O ports at b000 [size=256] Capabilities: [50] Power Management version 3 Capabilities: [80] MSI: Enable- Count=1/1 Maskable+ 64bit+ Capabilities: [98] Express Endpoint, MSI 00 Capabilities: [100] Advanced Error Reporting Capabilities: [130] Device Serial Number 00-10-dc-ff-ff-cf-56-1a Kernel driver in use: firewire_ohci Kernel modules: firewire-ohci
        06:00.0 SATA controller: JMicron Technology Corp. JMB362 SATA Controller (rev 10) (prog-if 01 [AHCI 1.0]) Subsystem: Dell Device 04a7 Flags: bus master, fast devsel, latency 0, IRQ 19 I/O ports at a040 [size=8] I/O ports at a030 [size=4] I/O ports at a020 [size=8] I/O ports at a010 [size=4] I/O ports at a000 [size=16] Memory at fa210000 (32-bit, non-prefetchable) [size=512] Capabilities: [8c] Power Management version 3 Capabilities: [50] Express Legacy Endpoint, MSI 00 Kernel driver in use: ahci

    Note that my wireless card is not showing; I have the Ralink 3390 card (which apparently does not show up on Ubuntu for some reason), but I am able to connect to the wireless network and to the Internet (when it is working).
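
    For reference, the r8168 driver swap described above usually looks something like this (a sketch; the installer script name is an assumption based on the stock Realtek package):

        # from the unpacked Realtek r8168 package:
        echo "blacklist r8169" | sudo tee -a /etc/modprobe.d/blacklist.conf
        sudo ./autorun.sh            # builds and installs the r8168 module
        sudo update-initramfs -u     # make the blacklist stick across reboots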

    Read the article

  • Demo on Data Guard Protection From Lost-Write Corruption

    - by Rene Kundersma
    Today I received the news that a new demo has been made available on OTN for Data Guard protection from lost-write corruption. Since this is a typical MAA solution and a very nice demo, I decided to mention this great feature in this blog as well, even though it has been a recommended best practice for some time. A lost write occurs when an I/O subsystem acknowledges the completion of a block write even though the write did not occur in the persistent storage. On a subsequent block read on the primary database, the I/O subsystem returns the stale version of the data block, which might be used to update other blocks of the database, thereby corrupting it. Lost writes can occur after an OS or storage device driver failure, faulty host bus adapters, disk controller failures and volume manager errors. This is exactly the scenario the demo simulates. When a primary database lost-write corruption is detected by a Data Guard physical standby database, Redo Apply (MRP) will stop and the standby will signal an ORA-752 error to explicitly indicate a primary lost write has occurred (preventing the corruption from spreading to the standby database). Links: MOS (1302539.1) "Best Practices for Corruption Detection, Prevention, and Automatic Repair - in a Data Guard Configuration"; Demo; MAA Best Practices. Rene Kundersma
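
    (For context: lost-write detection is driven by a standard initialization parameter set on both the primary and the standby. The statement below is general Data Guard configuration rather than something quoted from the demo text.)

        -- enable lost-write detection via reads of block version information
        ALTER SYSTEM SET DB_LOST_WRITE_PROTECT = TYPICAL SCOPE=BOTH;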

    Read the article

  • Daily Blog Archives and Duplicate Content

    - by nemmy
    A few weeks back I realised that my blog software was creating daily post archives, which basically resulted in duplicate content, especially if I only had one post a day. The situation is something like this: www.sitename.com/blog/archives/2013/06/01 - daily archive for 1 June 2013 - and www.sitename.com/blog/archives/2013/06/my-post-name.html - the post itself. So here we have two pages that are basically identical, except the daily archive has some meaningless title like "Daily Archive for 1 June 2013". And I have no control over which content Google decides is the primary content. It's quite possible (and likely) that the daily archive could be the "primary" content and the actual post itself the "duplicate". Once I realised it was doing this, I modified the daily archive template to include <meta name="robots" content="noindex">. Here we are a few weeks later and I still see some daily archives coming up in Google search results. I realise some of those deep pages might not have been crawled yet, but I am worried that the original post (which should be the PRIMARY content) has been marked as duplicate content by Google. Now that I've noindexed the daily archives, I might end up with noindexed content AND the original articles still flagged as duplicates, and nothing will show up in search at all. Have I screwed myself here, or is there a way out?
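
    One commonly suggested alternative (not from the original question) is to leave the archives indexable but point each one at its post with a canonical link, so Google consolidates the two URLs itself; for the single-post-per-day case above that would be:

        <link rel="canonical" href="http://www.sitename.com/blog/archives/2013/06/my-post-name.html">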

    Read the article

  • LDoms with Solaris 11

    - by Orgad Kimchi
    Oracle VM Server for SPARC (LDoms) release 2.2 came out on May 24. You can get the software, and see the release notes, reference manual, and admin guide, on the Oracle VM for SPARC page. Oracle VM Server for SPARC enables you to create multiple virtual systems on a single physical system. Each virtual system is called a logical domain and runs its own instance of Oracle Solaris 10 or Oracle Solaris 11. The version of the Oracle Solaris OS that runs on a guest domain is independent of the Oracle Solaris OS version that runs on the primary domain. So, if you run the Oracle Solaris 10 OS in the primary domain, you can still run the Oracle Solaris 11 OS in a guest domain, and if you run the Oracle Solaris 11 OS in the primary domain, you can still run the Oracle Solaris 10 OS in a guest domain. In addition, starting with the Oracle VM Server for SPARC 2.2 release you can migrate a guest domain even if the source and target machines have different processor types. You can migrate a guest domain from a system with an UltraSPARC T2+ or SPARC T3 CPU to a system with a SPARC T4 CPU. The guest domain on the source and target system must run Solaris 11 in order to enable cross-CPU migration. In addition, you need to change the cpu-arch property value on the source system (see the sketch below). For more information about Oracle VM Server for SPARC (LDoms) with Solaris 11 and Cross CPU Migration, refer to the following white paper.
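
    Setting that property is a short ldm sequence (a sketch; the domain and target host names are hypothetical, and the guest must be stopped to change the property):

        ldm stop-domain guest1
        ldm set-domain cpu-arch=generic guest1   # default is 'native'
        ldm start-domain guest1
        ldm migrate-domain guest1 target-host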

    Read the article

  • Difficulty Mounting Volumes on a Partitioned External HD

    - by Todd
    I'm having a great deal of difficulty with an external hard drive. I'm currently running a dual-boot system (XP Service Pack 3 and Ubuntu 11.04 Natty Narwhal) on a Dell Inspiron B120. I'm trying to set up a new 80 GB Hitachi external HD. Using GParted, I formatted the drive and set up the partitions. The partitioning scheme is as follows: 10GB NTFS primary, 2GB linux-swap primary, 50GB FAT32 primary, 12GB unallocated. After applying those changes, I went into Disk Utility and the HD appears along with the correct partitions. When I try to mount the volumes for partitions 1 and 3, I get a pop-up stating: "Error Mounting Volume - An error occurred while performing an operation on "Home" (Partition 3 of HTS548080m9AT00): The daemon is being inhibited." When I try to check the filesystem, I get a pop-up stating: "Error Checking filesystem on volume - An error occurred while performing an operation on "Home" (Partition 3 of HTS548080m9AT00): The daemon is being inhibited." Throughout the time that I'm attempting to troubleshoot the problem, the external drive light is on and blinking. With my frustration hitting boiling point, I try to shut down the drive and remove it so that I can plug in a different external HD that works PERFECTLY. However, when I try to shut down and safely remove the drive, I get a pop-up stating: "Error Detaching Drive - An error occurred while performing an operation on "80GB Hard Disk" (HTS548080m9AT00): The daemon is being inhibited." Can anyone tell me what I'm doing wrong? I'm a newbie and not that skilled with terminal commands, so please dumb it down for me if you request specific command output.
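
    For what it's worth, "the daemon is being inhibited" usually means the udisks daemon has been paused by a partitioning tool (GParted and Disk Utility do this while they run), so closing them and mounting by hand is a common workaround. A sketch - the device name is an assumption, so check it with sudo fdisk -l first:

        sudo mkdir -p /mnt/external
        sudo mount /dev/sdb3 /mnt/external   # partition 3 ("Home") in the scheme above
        # ...and when finished:
        sudo umount /mnt/external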

    Read the article

  • "The daemon is being inhibited" error message when mounting volumes on a partitioned external HD [closed]

    - by Todd
    I'm having a great deal of difficulty with an external hard drive. I'm currently running a dual boot system (XP Service Pack 3 and Ubuntu 11.04 Natty Narwahl) on a Dell Inspiron B120. I'm trying to set up a new 80 GB Hitachi external HD. Using GParted, I formatted the drive and set up the partitions. The partitioning scheme is as follows 10GB NTFS Primary, 2GB Linux-Swap Primary, 50GB FAT32 Primary, 12GB Unallocated. After applying those changes, I went into Disk Utility and the HD appears along with the correct partitions. When I try to mount the volumes for partitions 1 and 3, I get a pop-up stating: Error Mounting Volume An error occurred while performing an operation on "Home" (Partition 3 of HTS548080m9AT00): The daemon is being inhibited. When I try to to check the filesystem I get a pop-up stating: Error Checking filesystem on volume An error occurred while performing an operation on "Home" (Partition 3 of HTS548080m9AT00): The daemon is being inhibited. Throughout the time that I'm attempting to troubleshoot the problem, the external drive light is on and blinking. With my frustration hitting a boiling point, I try to shut down the drive and remove it so that I can plug in a different external HD that works PERFECTLY. However, when I try to shut down and safely remove the drive, I get a pop-up stating: Error Detaching Drive An error occurred while performing an operation on "80GB Hard Disk" (HTS548080m9AT00): The daemon is being inhibited. Can anyone tell me what I'm doing wrong? I'm a newbie and not that skilled with terminal commands, so please dumb it down for me if you request specific command output.

    Read the article

  • Ubuntu 13.04 alongside Windows 8 - How to partition from Windows

    - by mengelkoch
    I plan to install Ubuntu 13.04 alongside Windows 8, and I'm looking for a CLEAR answer on how to set up the partitioning appropriately. I'm very new to all of this, so a thorough explanation with minimal jargon would be great. I have an Acer Aspire M5 x64 with 6 GB RAM. I think I have already figured out how to deal with the Fast Startup, UEFI and Secure Boot issues (I disabled Fast Startup and disabled Secure Boot). I am able to boot into Ubuntu from a live USB, and I think I am ready to install Ubuntu. Note - despite some advice found here, I do have to disable Secure Boot to boot 13.04 from my live USB. From what I have read here, it seems that I should (at least at first) create the partitions from WITHIN Windows 8, not from the live USB, to avoid reported problems. I have run compmgmt.msc and I see the existing partitions:

        Disk 0: 400 MB Recovery; 300 MB EFI System; Acer (C:) 444.95 GB (Boot, Page File, Crash Dump, Primary Partition); 20 GB Recovery
        Disk 1: 3.74 GB Primary Partition; 14.90 GB Primary Partition

    I gather I need to create a root ('/') partition, a swap partition, and a home partition. Please explain what these are, how big they should be, how I create them from Windows Disk Management, and anything else I need to know. Eventually I plan to fully replace Windows 8 with Ubuntu, but for now I want to run alongside Windows 8 and not screw things up. I don't have any critical files saved on this computer yet. Thanks.
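
    Worth knowing while this waits for a full answer (a general point, not from the original question): Windows tools cannot create ext4 or swap partitions, so the usual approach is to shrink the Windows partition from Windows, leave the space unallocated, and let the Ubuntu installer's "Something else" option create '/', swap and /home. The shrink step can be done in Disk Management or scripted in an elevated diskpart session; the size here is illustrative, not a recommendation (shrink desired is in MB):

        diskpart
        list volume
        select volume C
        shrink desired=30000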

    Read the article

  • Database Migration Scripts: Getting from place A to place B

    - by Phil Factor
    We’ll be looking at a typical database ‘migration’ script which uses an unusual technique to migrate existing ‘de-normalised’ data into a more correct form. So, the book-distribution business that uses the PUBS database has gradually grown organically, and has slipped into ‘de-normalisation’ habits. What’s this? A new column with a list of tags or ‘types’ assigned to books. Because books aren’t really in just one category, someone has ‘cured’ the mismatch between the database and the business requirements. This is fine, but it is now proving difficult for their new website that allows searches by tags. Any request for a history book really has to look in the entire list of associated tags rather than the ‘Type’ field that only keeps the primary tag. We have other problems. The TypeList column has duplicates in there which will be affecting the reporting, and there is the danger of mis-spellings getting in there. The reporting system can’t be persuaded to do reports based on the tags, and the database developers are complaining about the unCoddly things going on in their database. In your version of PUBS, this extra column doesn’t exist, so we’ve added it and put in 10,000 titles using SQL Data Generator.

        /* So how do we refactor this database? Firstly, we create a table of all the tags. */
        IF OBJECT_ID('TagName') IS NULL OR OBJECT_ID('TagTitle') IS NULL
        BEGIN
          CREATE TABLE TagName
            (TagName_ID INT IDENTITY(1,1) PRIMARY KEY,
             Tag VARCHAR(20) NOT NULL UNIQUE)

          /* ...and we insert into it all the tags from the list (remembering to take out any leading spaces) */
          INSERT INTO TagName (Tag)
          SELECT DISTINCT LTRIM(x.y.value('.', 'Varchar(80)')) AS [Tag]
          FROM (SELECT Title_ID,
                       CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>') AS XMLkeywords
                FROM dbo.titles) g
          CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x (y)

          /* we can then use this table to provide a table that relates tags to articles */
          CREATE TABLE TagTitle
            (TagTitle_ID INT IDENTITY(1,1),
             [title_id] [dbo].[tid] NOT NULL REFERENCES titles (Title_ID),
             TagName_ID INT NOT NULL REFERENCES TagName (Tagname_ID)
             CONSTRAINT [PK_TagTitle]
               PRIMARY KEY CLUSTERED ([title_id] ASC, TagName_ID)
               ON [PRIMARY])

          CREATE NONCLUSTERED INDEX idxTagName_ID
            ON TagTitle (TagName_ID) INCLUDE (TagTitle_ID, title_id)

          /* ...and it is easy to fill this with the tags for each title... */
          INSERT INTO TagTitle (Title_ID, TagName_ID)
          SELECT DISTINCT Title_ID, TagName_ID
          FROM (SELECT Title_ID,
                       CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>') AS XMLkeywords
                FROM dbo.titles) g
          CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x (y)
          INNER JOIN TagName ON TagName.Tag = LTRIM(x.y.value('.', 'Varchar(80)'))
        END

        /* That's all there was to it. Now we can select all titles that have the military tag, just to try things out */
        SELECT Title FROM titles
        INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
        INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
        WHERE tagname.tag = 'Military'

        /* and see the top ten most popular tags for titles */
        SELECT Tag, COUNT(*) FROM titles
        INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
        INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
        GROUP BY Tag ORDER BY COUNT(*) DESC

        /* and if you still want your list of tags for each title, then here they are */
        SELECT title_ID, title, STUFF(
          (SELECT ',' + tagname.tag FROM titles thisTitle
           INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
           INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
           WHERE ThisTitle.title_id = titles.title_ID
           FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '')
        FROM titles
        ORDER BY title_ID

    So we’ve refactored our PUBS database without pain. We’ve even put in a check to prevent it being re-run once the new tables are created, and the original post includes a diagram of the new tag relationship. We’ve done both the DDL to create the tables and their associated components, and the DML to put the data in them. I could have also included the script to remove the de-normalised TypeList column, but I’d do a whole lot of tests first before doing that. Yes, I’ve left out the assertion tests too, which should check the edge cases and make sure the result is what you’d expect. One thing I can’t quite figure out is how to deal with an ordered list using this simple XML-based technique. We can ensure that, if we have to produce a list of tags, we can get the primary ‘type’ to be first in the list, but what if the entire order is significant? Thank goodness it isn’t in this case. If it were, we might have to revisit a string-splitter function that returns the ordinal position of each component in the sequence.

    You’ll see immediately that we can create a synchronisation script for deployment from a comparison tool such as SQL Compare, to change the schema (DDL). On the other hand, no tool could do the DML to stuff the data into the new table, since there is no way that any tool will be able to work out where the data should go. We used some pretty hairy code to deal with a slightly untypical problem. We would have to do this migration by hand, and it has to go into source control as a batch. If most of your database changes are to be deployed by an automated process, then there must be a way of overriding this part of the data synchronisation process: taking the part of the script that fills the tables, checking that the tables have not already been filled, and executing it as part of the transaction. Of course, you might prefer the approach I’ve taken with the script, of creating the tables in the same batch as the data conversion process, and then using the presence of the tables to prevent the script from being re-run.

    The problem with scripting a refactoring change to a database is that it has to work both ways. If we install the new system and then have to roll back the changes, several books may have been added, or had their tags changed, in the meantime. Yes, you have to script any rollback! These have to be mercilessly tested, and put in source control, just in case of the rollback of a deployment after it has been in place for any length of time. I’ve shown you how to do this with this part of the script:

        /* and if you still want your list of tags for each title, then here they are */
        SELECT title_ID, title, STUFF(
          (SELECT ',' + tagname.tag FROM titles thisTitle
           INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
           INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
           WHERE ThisTitle.title_id = titles.title_ID
           FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '')
        FROM titles
        ORDER BY title_ID

    …which would be turned into an UPDATE … FROM script:

        UPDATE titles SET typelist = ThisTaglist
        FROM (SELECT title_ID, title, STUFF(
                (SELECT ',' + tagname.tag FROM titles thisTitle
                 INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
                 INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
                 WHERE ThisTitle.title_id = titles.title_ID
                 ORDER BY CASE WHEN tagname.tag = titles.[type] THEN 1 ELSE 0 END DESC
                 FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '') AS ThisTagList
              FROM titles) f
        INNER JOIN Titles ON f.title_ID = Titles.title_ID

    You’ll notice that it isn’t quite a round trip, because the tags are in a different order, though we’ve managed to make sure that the primary tag is the first one, as originally. So, we’ve improved the database for the poor book distributors using PUBS. It is not a major deal, but you’ve got to be prepared to provide a migration script that will go both forwards and backwards. Ideally, database refactoring scripts should be able to go from any version to any other. Schema synchronization scripts can do this pretty easily, but no data synchronisation script can deal with serious refactoring jobs without the developers being able to specify how to deal with cases like this.
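
    On the ordered-list aside above: one way to get ordinal positions back out of a comma list is a recursive splitter along these lines (a sketch added for illustration, not part of Phil's original script; lists longer than 100 items would need OPTION (MAXRECURSION 0) on the calling query):

        -- returns each item with its 1-based position in the list
        CREATE FUNCTION dbo.SplitWithOrder (@list varchar(8000), @delim char(1))
        RETURNS TABLE
        AS RETURN
        WITH Pieces (ordinal, start, stop) AS (
          SELECT 1, 1, CHARINDEX(@delim, @list)
          UNION ALL
          SELECT ordinal + 1, stop + 1, CHARINDEX(@delim, @list, stop + 1)
          FROM Pieces
          WHERE stop > 0
        )
        SELECT ordinal,
               item = LTRIM(SUBSTRING(@list, start,
                      CASE WHEN stop > 0 THEN stop - start
                           ELSE LEN(@list) - start + 1 END))
        FROM Pieces;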

    Read the article

  • How to move ubuntu 12.04 on another drive

    - by Maksim
    How can I move my Ubuntu install to another drive? I know about Clonezilla, but the problem is that the destination drive is smaller than the source one, and GParted can't copy and paste a partition unless the destination space sits after the last partition. I tried the dpkg --get-selections/--set-selections approach and apt-clone. The first did not install all my packages and removed existing ones, so now I don't have full Unity or all of my packages. The second just fails while configuring a package. Before doing that, I copied my /etc to the new system. My partition tables:

        destination: gpt
        1   1049kB  106MB   105MB   fat32            EFI System  ???????????
        2   106MB   12,1GB  12,0GB  ext4
        3   12,1GB  66,3GB  54,2GB  ext4

        source: msdos
        1   1049kB  12,0GB  12,0GB  primary  ext4    ???????????
        2   12,0GB  492GB   480GB   primary  ext4
        3   492GB   500GB   8107MB  primary  linux-swap(v1)

    GPT isn't working with Ubuntu using GRUB 1.99. I don't know why, but my laptop can't boot any device with UEFI - just a black screen - yet Ubuntu detects UEFI on a fresh install.
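
    A common alternative when the target disk is smaller (a sketch with assumed device names - adjust sdb2/sdb to the actual layout) is to copy the files rather than the partition, then reinstall the bootloader:

        sudo mount /dev/sdb2 /mnt                      # new root partition
        sudo rsync -aAXv / /mnt \
            --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"}
        # update /mnt/etc/fstab with the new partition UUIDs (see blkid), then:
        for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
        sudo chroot /mnt grub-install /dev/sdb
        sudo chroot /mnt update-grub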

    Read the article

  • "The volume filesystem root has only..."

    - by jcslzr
    I am having this problem on Ubuntu 12.04, and I find it strange that when I go to /tmp it won't allow me to delete some files, with the message "Operation not permitted" or "this file could not be handled because you dont have permissions to read it". It is only a PC and I have the root password. I was trying to free at least 2000 MB on the root file system to upgrade to 12.10 and see if that resolved the problem. Current free space on the root file system is 190 MB. This is my output:

        root@jcsalazar-Vostro-3550:~# df
        Filesystem     1K-blocks    Used  Available Use% Mounted on
        /dev/sda6        7688360 7112824     184984  98% /
        udev             2009288       4    2009284   1% /dev
        tmpfs             806636    1024     805612   1% /run
        none                5120       0       5120   0% /run/lock
        none             2016584    5316    2011268   1% /run/shm
        /dev/sda5         472036  255920     191745  58% /boot
        /dev/sda7       30758848 7085480   22110900  25% /home

        root@jcsalazar-Vostro-3550:~# sudo parted -l
        Model: ATA TOSHIBA MK3261GS (scsi)
        Disk /dev/sda: 320GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type      File system     Flags
         1      1049kB  106MB   105MB   primary   fat16
         2      106MB   15.8GB  15.7GB  primary   ntfs            boot
         3      15.8GB  278GB   262GB   primary   ntfs
         4      278GB   320GB   41.9GB  extended
         5      278GB   279GB   499MB   logical   ext4
         6      279GB   287GB   7999MB  logical   ext4
         7      287GB   319GB   32.0GB  logical   ext4
         8      319GB   320GB   1443MB  logical   linux-swap(v1)

    I appreciate any new ideas that can help me. Thnx, Carlos
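
    A few standard space-recovery commands that are often suggested for a 98%-full root partition (a sketch; nothing here is specific to this machine):

        sudo apt-get clean          # empty the .deb cache in /var/cache/apt/archives
        sudo apt-get autoremove     # drop orphaned packages, e.g. old kernels
        sudo du -xh --max-depth=1 / | sort -h    # see which top-level dirs use the space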

    Read the article

  • Manage Files Easier With Aero Snap in Windows 7

    - by Mysticgeek
    Before the days of Aero Snap you would need to arrange your windows in some weird way to see all of your files. Today we show you how to use the Aero Snap feature to get it done in a few keystrokes in Windows 7. You can of course arrange the windows in Explorer yourself so you can see everything side by side, or use a free utility like Cubic Explorer.

    Getting Explorer Windows Side by Side

    The process is actually simple but quite useful when looking through a large amount of data. Right-click the Windows Explorer icon on the taskbar and click Windows Explorer. Our first window opens up. You can certainly drag it over to the right or left side of the screen, but the quickest method is the Windows Key+Right Arrow combo (make sure to hold the Windows key down). Now the window is nicely placed on the right side. Next we want to open the other window, so simply right-click the Explorer icon again and click Windows Explorer. Now we have our second window open, and all we need to do this time is use the Windows Key+Left Arrow combination. There we go! Now you should be able to browse your files a lot more simply than relying on the expanding-tree method (as much). You can actually use this method to snap a window to all four corners of your screen if you don't feel like dragging it. Once you play with Aero Snap more you may enjoy it, but if you still despise it, you can disable it too!

    Read the article

  • Unable to ssh out anywhere - ssh_exchange_identification

    - by Chowlett
    I have a setup where I'm running Ubuntu 11.10 as a VirtualBox guest under a Windows 7 host, behind a restrictive corporate firewall. I have set up NAT from the host's port 22 to Ubuntu's port 22; IT inform me that they have opened port 22 outbound for the host machine's IP address. I have run ssh-keygen -t rsa, and am trying to test the setup by connecting to GitHub and another known ssh server. In both cases the connection is refused with ssh_exchange_identification: Connection closed by remote host. The full -vvv log is below. Is this possibly still due to the corporate firewall? If so, what else might I need to request from them? Any other ideas what might be wrong and how to fix it?

        ~$ ssh -Tvvv [email protected]
        OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to github.com [207.97.227.239] port 22.
        debug1: Connection established.
        debug3: Incorrect RSA1 identifier
        debug3: Could not load "/home/chris/.ssh/id_rsa" as a RSA1 public key
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug3: key_read: missing keytype
        debug2: key_type_from_name: unknown key type 'Proc-Type:'
        debug3: key_read: missing keytype
        debug2: key_type_from_name: unknown key type 'DEK-Info:'
        debug3: key_read: missing keytype
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug2: key_type_from_name: unknown key type '-----END'
        debug3: key_read: missing keytype
        debug1: identity file /home/chris/.ssh/id_rsa type 1
        debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
        debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
        debug1: identity file /home/chris/.ssh/id_rsa-cert type -1
        debug1: identity file /home/chris/.ssh/id_dsa type -1
        debug1: identity file /home/chris/.ssh/id_dsa-cert type -1
        debug1: identity file /home/chris/.ssh/id_ecdsa type -1
        debug1: identity file /home/chris/.ssh/id_ecdsa-cert type -1
        ssh_exchange_identification: Connection closed by remote host

    Edit: Requested diagnostics:

        ~$ ls -la ~/.ssh
        total 16
        drwx------  2 chris chris 4096 2012-03-30 13:12 .
        drwxr-xr-x 29 chris chris 4096 2012-03-30 13:25 ..
        -rw-------  1 chris chris 1766 2012-03-30 13:12 id_rsa
        -rw-r--r--  1 chris chris  409 2012-03-30 13:12 id_rsa.pub
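
    One quick check that often narrows this down (a general diagnostic, not from the original question): connect with netcat and look for the SSH banner. A healthy server prints something like SSH-2.0-OpenSSH_... before any authentication happens; if the TCP connection opens and then closes with no banner, something in the path (often a proxy or inspecting firewall) is intercepting the session:

        nc -v github.com 22
        # on success, expect a banner line such as:  SSH-2.0-...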

    Read the article

  • Using MAC Authentication for simple Web API’s consumption

    - by cibrax
    For simple scenarios of Web API consumption where identity delegation is not required, traditional HTTP authentication schemes such as basic, certificates or digest are the most used nowadays. All these schemes rely on sending the caller credentials, or some representation of them, in every request message as part of the Authorization header, so they are prone to phishing attacks if they are not correctly secured at transport level with https. In addition, most client applications typically authenticate two different things: the caller application and the user consuming the API on behalf of that application. In most cases, the scheme is simplified by using a single set of username and password for authenticating both, making it necessary to store those credentials temporarily somewhere in memory. The truth is that you can use two different identities: one for the user running the application, which you might authenticate just once during the first call when the application is initialized, and another identity for the application itself, which you use on every call. Some cloud vendors like Windows Azure or Amazon Web Services have adopted a scheme to authenticate the caller application based on a Message Authentication Code (MAC) generated with a symmetric algorithm using a key known by the two parties, the caller and the Web API. The caller must include a MAC as part of the Authorization header, created from different pieces of information in the request message such as the address, the host, and some other headers. The Web API can authenticate the caller by using the key associated to it and validating the attached MAC in the request message. That way, no credentials are sent as part of the request message, so there is no way for an attacker to intercept the message and get access to those credentials. This scheme still suffers from some deficiencies that can enable attacks. For example, brute force can still be used to infer the key used for generating the MAC and impersonate the original caller; this can be mitigated by renewing keys in a relatively short period of time. This scheme, like any other, can be complemented with transport security. Eran Hammer, one of the brains behind OAuth, has recently published a specification of a protocol based on MAC for HTTP authentication called Hawk. The initial version of the spec is available here. A curious fact is that a prose specification per se does not exist: the specification is the code that Eran initially wrote using node.js. In that implementation, you can associate a key to a user, so once the MAC has been verified on the Web API, the user can be inferred from that key. A timestamp is also used to avoid replay attacks. As a pet project, I decided to port that code to .NET using ASP.NET Web API, which is also available in github under https://github.com/pcibraro/hawknet Enjoy!
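    To make the idea concrete, here is a minimal Python sketch of the pattern described above: compute a MAC over request parts the server can recompute, and send only a key id plus the MAC, never the key itself. The normalized-string layout below is illustrative only; Hawk's actual normalization differs in detail:

        import base64
        import hashlib
        import hmac
        import time

        def make_mac_header(key_id, secret, method, path, host, port):
            # Pieces of the request that both sides can derive independently.
            ts = str(int(time.time()))
            normalized = "\n".join([ts, method.upper(), path, host.lower(), str(port)])
            digest = hmac.new(secret.encode(), normalized.encode(), hashlib.sha256).digest()
            mac = base64.b64encode(digest).decode()
            # Only the key id and the MAC travel with the request.
            return 'MAC id="%s", ts="%s", mac="%s"' % (key_id, ts, mac)

        print(make_mac_header("caller-app", "shared-secret", "GET", "/orders", "api.example.org", 443))

    The server looks up the secret for the given key id, rebuilds the normalized string from the request it actually received, recomputes the MAC, and rejects the call on any mismatch; the timestamp bounds the window for replaying a captured header.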

    Read the article

  • How to Add Control Panel to “My Computer” in Windows 7 or Vista

    - by The Geek
    Back in the Windows XP days, you could easily add Control Panel to My Computer with a simple checkbox in the folder view settings. Windows 7 and Vista don’t make this quite as easy, but there’s still a way to get it back. To make this tweak, we’ll be doing a quick registry hack, but there’s a downloadable version provided as well. Manual Registry Tweak to Add Control Panel Open up regedit.exe through the start menu search or run box, and then browse down to the following key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace Now that you’re there, you’ll need to right-click and create a new key. If you want to add the regular Control Panel view, with the categories, you’ll need to use one GUID as the name of the key. If you want the icon view instead, you can use the other key. Here they are: Category View: {26EE0668-A00A-44D7-9371-BEB064C98683} Icon View: {21EC2020-3AEA-1069-A2DD-08002B30309D} Once you’re done, the new GUID key sits directly under NameSpace. Now over in the Computer view, just hit the F5 key to refresh the panel, and you should see the new icon pop up in the list. When you click on the icon you’ll be taken to Control Panel. If you didn’t know how to change the view before, you can use the drop-down box on the right-hand side to switch between Category and icon view. Downloadable Registry Hack Rather than deal with manual registry editing, you can simply download the file, extract it, and then either double-click AddCategoryControlPanel.reg to add the Category view icon, or AddIconControlPanel.reg to add the other icon. There’s an uninstall script provided for each. Download ControlPanelMyComputer Registry Hack from howtogeek.com
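    For completeness, the same tweak can be scripted instead of clicked through; a minimal Python sketch using the standard winreg module, assuming an elevated process on Windows (the GUID is the Category-view one from the article):

        import winreg

        # Swap in {21EC2020-3AEA-1069-A2DD-08002B30309D} for the icon view.
        GUID = "{26EE0668-A00A-44D7-9371-BEB064C98683}"
        SUBKEY = (r"SOFTWARE\Microsoft\Windows\CurrentVersion"
                  r"\Explorer\MyComputer\NameSpace" + "\\" + GUID)

        # Creating the empty key is the whole tweak; no values are required.
        winreg.CloseKey(winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, SUBKEY))
        print("Created HKLM\\" + SUBKEY)

    After running it, press F5 in the Computer window just as with the manual edit; deleting the same key undoes the change.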

    Read the article

  • Ed Burns' Servlet 4/HTTP 2 Session at JavaOne 2014

    - by reza_rahman
    For the Java EE track at JavaOne 2014 we are highlighting some key sessions and speakers to better inform you of what you can expect, right up until the start of the conference. To this end we recently interviewed Ed Burns. Ed is a veteran of Sun and now Oracle. He has been, and remains, instrumental in pushing the JSF ecosystem forward as specification lead. Beyond his specification work, Ed is well regarded as an author and speaker in his own right. In addition to carrying the JSF torch, Ed will be co-leading the key Servlet 4 specification for Java EE 8, along with Servlet specification guru Shing Wai Chan. The primary goal of Servlet 4 is to enable the fundamentally important changes in HTTP 2 for the entire server-side Java ecosystem. We wanted to talk to Ed about his Servlet 4 session at JavaOne 2014 and HTTP 2 generally. The details for the Servlet 4 session can be found here. Ed has several other key sessions on the track that we hope to talk to him about separately in the near future: What’s Next for JSF?: In this key session, Ed will be sharing the next steps for the continued evolution of the JSF specification in Java EE 8. Where’s My UI? The 2014 JavaOne Web App UI Smackdown: The UI space for web applications, especially in the Java ecosystem, continues to be as hotly contested as ever. This is especially true with the (re)introduction of JavaScript-based rich client frameworks like AngularJS. This lively panel brings together experts representing the diverse schools of thought for web UIs. Ed will be representing JSF, of course. Neal Ford will moderate the panel as an independent and hopefully reasonably neutral party. Adopt-a-JSR for Java EE 7 and Java EE 8: Adopt-a-JSR has been a reasonable success for Java EE 7. With Java EE 8 we are planning to strengthen it far more as a way of getting grassroots-level participation in the specification efforts. This session will introduce Adopt-a-JSR, share how it worked for Java EE 7, and cover what we plan to do with it in Java EE 8. Ed will be sharing his perspectives on Adopt-a-JSR for both Java EE 7 and Java EE 8. Besides Ed's sessions, we have a very strong program for the Java EE track and JavaOne overall; just explore the content catalog. If you can't make it, you can be assured that we will make key content available after the conference, just as we have always done.

    Read the article

  • Is MongoDB a good choice or not for my application?

    - by shubham
    I have a reporting application which stores reports in XML format as received from the source (the XML schema is not defined; it can be any format), and those reports contain keys and values: jobid and setid might be keys for one type of report, userid and groupId for another, and so on. The set of keys that can be referred from a document is determined by the namespaces used in the XML doc, and keys are stored on the basis of the namespace used. For example, if a tag in an XML fragment uses namespace="myspace1", and I have keys A and B for myspace1 stored in another table, the application fetches those keys from that table for this namespace, looks for their values in the XML doc, and stores them in another table along with a pointer to the XML document (the id of a record storing the complete XML document in a cell). Use cases: When the user queries for a key and value, I return the document or set of documents having those key/value pairs. When the user queries for a certain key and provides the name of a pre-stored XSLT, I fetch the set of documents fulfilling that criteria and convert the XML to HTML with the specified XSLT. When the user asks for a particular fragment of a doc, the application can fetch a subset from a particular document as well. When the user queries for the top x values of a certain key, I return the set of documents having the top x values of that key. I am using a DB2 database for its XML support along with relational capabilities. That makes it easier for me to run XPath expressions, fetch values of keys, and aggregate sets of documents fulfilling a criteria, all on the database side. Problems: DB2 stores XML docs of up to 2 GB in size, and retrieval is very slow. If a query involves many documents, it takes significant time for results to show up in the browser, and the user has to wait. Can MongoDB help in this case, as it is document oriented? Can I do XML-related XPath queries and document transformations on the db side? Or is it OK to use both in such a case?
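    One thing to factor into the decision: MongoDB has no server-side XPath or XSLT, so the key extraction and the XML-to-HTML transformations would move into the application layer, with the extracted key/value pairs stored alongside the raw XML in the same document. A rough pymongo sketch of that layout, with made-up field names, covering use cases 1 and 4 (assumes a local mongod):

        from pymongo import MongoClient, DESCENDING

        client = MongoClient()  # adjust the connection URI as needed
        reports = client.reporting.reports

        # Keys already extracted for the document's namespace live next to
        # the raw XML, so reads never have to parse XML on the fly.
        reports.insert_one({
            "namespace": "myspace1",
            "keys": {"jobid": 42, "setid": 7},
            "xml": "<report xmlns='myspace1'>...</report>",
        })

        # Use case 1: every document having a given key/value pair.
        for doc in reports.find({"keys.jobid": 42}):
            print(doc["_id"])

        # Use case 4: documents with the top 10 values of a key;
        # an index on "keys.jobid" keeps both queries fast.
        top = reports.find({"keys.jobid": {"$exists": True}})
        top = top.sort("keys.jobid", DESCENDING).limit(10)

    Fetching fragments of a large XML document server-side (use case 3) is where MongoDB gives you the least over DB2, since the XML is opaque to it; you would either pre-split fragments into their own documents or parse in the application.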

    Read the article

  • Finding "Stuff" In OUM

    - by Dave Burke
    One of the first questions people ask when they start using the Oracle Unified Method (OUM) is “how do I find X?” Well, of course no one is really looking for “X”, but typically an OUM user might know the Task ID, or part of the Task Name, or maybe they just want to find out if there is any content within OUM related to a couple of key words they have in mind. Here are three quick tips I give people: 1. Open up one of the OUM Views, click “Expand All”, and then use your browser’s search function to locate a key word. For example, in Google Chrome or Internet Explorer: <CTRL> F, then type in a key word, i.e. Architecture. This is a fast and easy option to use, but it only searches the current OUM page. 2. Use the PDF view of OUM. Open up one of the OUM Views, and then click the PDF View button located at the top of the View. Depending on your browser’s settings, the PDF file will either open up in a new window or be saved to your local machine. In either case, once the PDF file is open, you can use the built-in PDF search commands to search for key words across a large portion of the OUM Method Pack. This is a great option for searching the entire Full Method View of OUM, including linked HTML pages; however, the search will not include linked documents, i.e. Word, Excel. 3. Use your operating system's file index to search for key words. This is my favorite option, and one I use virtually every day. I happen to use Windows Search, but you could also use Google Desktop Search, or Finder on a Mac. All you need to do (on a Windows machine) is make sure your local OUM folder structure is included in the Windows index. Go to Control Panel, select Indexing Options, and ensure your OUM folder is included in the index, i.e. C:/METHOD/OM40/OUM_5.6. Once your OUM folders are indexed, just open up Windows Search (or Google Desktop Search) and type in your key words, i.e. Unit Testing. The reason I use this option the most is because the search takes place across the entire content of the indexed folders, including linked files. Happy searching!
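    If the OUM folders are not (or cannot be) indexed, a small script gets you the same kind of sweep as tip 3; a minimal Python sketch, where the folder path mirrors the article's example and the extension list is illustrative:

        import os

        ROOT = r"C:\METHOD\OM40\OUM_5.6"
        KEYWORDS = ("unit", "testing")

        # Walk the OUM install and print every text file that mentions
        # all of the key words, case-insensitively.
        for dirpath, _dirs, filenames in os.walk(ROOT):
            for name in filenames:
                if not name.lower().endswith((".htm", ".html", ".txt")):
                    continue
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        text = f.read().lower()
                except OSError:
                    continue
                if all(word in text for word in KEYWORDS):
                    print(path)

    Unlike the desktop index, this sees only plain-text formats; Word and Excel files would need a library that can read them.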

    Read the article

  • Problems with inheritance query view and one to many association in entity framework 4

    - by Kazys
    Hi, I have a situation in which I am stuck and don't see a way out. The problem is in my bigger model, but I have made a small example which shows the same problem. I have 4 tables, called SuperParent, NamedParent, TypedParent and ParentType. NamedParent and TypedParent derive from SuperParent. TypedParent has a one-to-many association with ParentType. I describe the mapping for the entities using QueryView. The problem is that when I want to get TypedParents and Include ParentType, I get the following exception: An error occurred while preparing the command definition. See the inner exception for details. --- System.ArgumentException: The ResultType of the specified expression is not compatible with the required type. The expression ResultType is 'Transient.reference[PasibandymaiModel.SuperParent]' but the required type is 'Transient.reference[PasibandymaiModel.TypedParent]'. Parameter name: arguments[1] To get TypedParents I use the following code: context.SuperParent.OfType<TypedParent>().Include("ParentType"); My edmx file: <edmx:Edmx Version="2.0" xmlns:edmx="http://schemas.microsoft.com/ado/2008/10/edmx"> <!-- EF Runtime content --> <edmx:Runtime> <!-- SSDL content --> <edmx:StorageModels> <Schema Namespace="PasibandymaiModel.Store" Alias="Self" Provider="System.Data.SqlClient" ProviderManifestToken="2005" xmlns:store="http://schemas.microsoft.com/ado/2007/12/edm/EntityStoreSchemaGenerator" xmlns="http://schemas.microsoft.com/ado/2009/02/edm/ssdl"> <EntityContainer Name="PasibandymaiModelStoreContainer"> <EntitySet Name="NamedParent" EntityType="PasibandymaiModel.Store.NamedParent" store:Type="Tables" Schema="dbo" /> <EntitySet Name="ParentType" EntityType="PasibandymaiModel.Store.ParentType" store:Type="Tables" Schema="dbo" /> <EntitySet Name="SuperParent" EntityType="PasibandymaiModel.Store.SuperParent" store:Type="Tables" Schema="dbo" /> <EntitySet Name="TypedParent" EntityType="PasibandymaiModel.Store.TypedParent" store:Type="Tables" Schema="dbo" /> <AssociationSet Name="fk_NamedParent_SuperParent" Association="PasibandymaiModel.Store.fk_NamedParent_SuperParent"> <End Role="SuperParent" EntitySet="SuperParent" /> <End Role="NamedParent" EntitySet="NamedParent" /> </AssociationSet> <AssociationSet Name="fk_TypedParent_ParentType" Association="PasibandymaiModel.Store.fk_TypedParent_ParentType"> <End Role="ParentType" EntitySet="ParentType" /> <End Role="TypedParent" EntitySet="TypedParent" /> </AssociationSet> <AssociationSet Name="fk_TypedParent_SuperParent" Association="PasibandymaiModel.Store.fk_TypedParent_SuperParent"> <End Role="SuperParent" EntitySet="SuperParent" /> <End Role="TypedParent" EntitySet="TypedParent" /> </AssociationSet> </EntityContainer> <EntityType Name="NamedParent"> <Key> <PropertyRef Name="ParentId" /> </Key> <Property Name="ParentId" Type="int" Nullable="false" /> <Property Name="Name" Type="nvarchar" Nullable="false" MaxLength="100" /> </EntityType> <EntityType Name="ParentType"> <Key> <PropertyRef Name="ParentTypeId" /> </Key> <Property Name="ParentTypeId" Type="int" Nullable="false" StoreGeneratedPattern="Identity" /> <Property Name="Name" Type="nvarchar" MaxLength="100" /> </EntityType> <EntityType Name="SuperParent"> <Key> <PropertyRef Name="ParentId" /> </Key> <Property Name="ParentId" Type="int" Nullable="false" StoreGeneratedPattern="Identity" /> <Property Name="SomeAttribute" Type="nvarchar" Nullable="false" MaxLength="100" /> </EntityType> <EntityType Name="TypedParent"> <Key> <PropertyRef Name="ParentId" /> </Key> <Property Name="ParentId" Type="int" Nullable="false" />
<Property Name="ParentTypeId" Type="int" Nullable="false"/> </EntityType> <Association Name="fk_NamedParent_SuperParent"> <End Role="SuperParent" Type="PasibandymaiModel.Store.SuperParent" Multiplicity="1" /> <End Role="NamedParent" Type="PasibandymaiModel.Store.NamedParent" Multiplicity="0..1" /> <ReferentialConstraint> <Principal Role="SuperParent"> <PropertyRef Name="ParentId" /> </Principal> <Dependent Role="NamedParent"> <PropertyRef Name="ParentId" /> </Dependent> </ReferentialConstraint> </Association> <Association Name="fk_TypedParent_ParentType"> <End Role="ParentType" Type="PasibandymaiModel.Store.ParentType" Multiplicity="1" /> <End Role="TypedParent" Type="PasibandymaiModel.Store.TypedParent" Multiplicity="*" /> <ReferentialConstraint> <Principal Role="ParentType"> <PropertyRef Name="ParentTypeId" /> </Principal> <Dependent Role="TypedParent"> <PropertyRef Name="ParentTypeId" /> </Dependent> </ReferentialConstraint> </Association> <Association Name="fk_TypedParent_SuperParent"> <End Role="SuperParent" Type="PasibandymaiModel.Store.SuperParent" Multiplicity="1" /> <End Role="TypedParent" Type="PasibandymaiModel.Store.TypedParent" Multiplicity="0..1" /> <ReferentialConstraint> <Principal Role="SuperParent"> <PropertyRef Name="ParentId" /> </Principal> <Dependent Role="TypedParent"> <PropertyRef Name="ParentId" /> </Dependent> </ReferentialConstraint> </Association> <Function Name="ChildDelete" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ChildId" Type="int" Mode="In" /> </Function> <Function Name="ChildInsert" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="Name" Type="nvarchar" Mode="In" /> <Parameter Name="ParentId" Type="int" Mode="In" /> </Function> <Function Name="ChildUpdate" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ChildId" Type="int" Mode="In" /> <Parameter Name="ParentId" Type="int" Mode="In" /> <Parameter Name="Name" Type="nvarchar" Mode="In" /> </Function> <Function Name="NamedParentDelete" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentId" Type="int" Mode="In" /> </Function> <Function Name="NamedParentInsert" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="Name" Type="nvarchar" Mode="In" /> <Parameter Name="SomeAttribute" Type="nvarchar" Mode="In" /> </Function> <Function Name="NamedParentUpdate" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentId" Type="int" Mode="In" /> <Parameter Name="SomeAttribute" Type="nvarchar" Mode="In" /> <Parameter Name="Name" Type="nvarchar" Mode="In" /> </Function> <Function Name="ParentTypeDelete" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentTypeId" Type="int" Mode="In" /> </Function> <Function Name="ParentTypeInsert" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" 
Schema="dbo"> <Parameter Name="Name" Type="nvarchar" Mode="In" /> </Function> <Function Name="ParentTypeUpdate" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentTypeId" Type="int" Mode="In" /> <Parameter Name="Name" Type="nvarchar" Mode="In" /> </Function> <Function Name="TypedParentDelete" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentId" Type="int" Mode="In" /> </Function> <Function Name="TypedParentInsert" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentTypeId" Type="int" Mode="In" /> <Parameter Name="SomeAttribute" Type="nvarchar" Mode="In" /> </Function> <Function Name="TypedParentUpdate" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentId" Type="int" Mode="In" /> <Parameter Name="SomeAttribute" Type="nvarchar" Mode="In" /> <Parameter Name="ParentTypeId" Type="int" Mode="In" /> </Function> </Schema> </edmx:StorageModels> <!-- CSDL content --> <edmx:ConceptualModels> <Schema Namespace="PasibandymaiModel" Alias="Self" xmlns:annotation="http://schemas.microsoft.com/ado/2009/02/edm/annotation" xmlns="http://schemas.microsoft.com/ado/2008/09/edm"> <EntityContainer Name="PasibandymaiEntities" annotation:LazyLoadingEnabled="true"> <EntitySet Name="ParentType" EntityType="PasibandymaiModel.ParentType" /> <EntitySet Name="SuperParent" EntityType="PasibandymaiModel.SuperParent" /> <AssociationSet Name="ParentTypeTypedParent" Association="PasibandymaiModel.ParentTypeTypedParent"> <End Role="ParentType" EntitySet="ParentType" /> <End Role="TypedParent" EntitySet="SuperParent" /> </AssociationSet> </EntityContainer> <EntityType Name="NamedParent" BaseType="PasibandymaiModel.SuperParent"> <Property Type="String" Name="Name" Nullable="false" MaxLength="100" FixedLength="false" Unicode="true" /> </EntityType> <EntityType Name="ParentType"> <Key> <PropertyRef Name="ParentTypeId" /> </Key> <Property Type="Int32" Name="ParentTypeId" Nullable="false" annotation:StoreGeneratedPattern="Identity" /> <Property Type="String" Name="Name" MaxLength="100" FixedLength="false" Unicode="true" /> <NavigationProperty Name="TypedParent" Relationship="PasibandymaiModel.ParentTypeTypedParent" FromRole="ParentType" ToRole="TypedParent" /> </EntityType> <EntityType Name="SuperParent" Abstract="true"> <Key> <PropertyRef Name="ParentId" /> </Key> <Property Type="Int32" Name="ParentId" Nullable="false" annotation:StoreGeneratedPattern="Identity" /> <Property Type="String" Name="SomeAttribute" Nullable="false" MaxLength="100" FixedLength="false" Unicode="true" /> </EntityType> <EntityType Name="TypedParent" BaseType="PasibandymaiModel.SuperParent"> <NavigationProperty Name="ParentType" Relationship="PasibandymaiModel.ParentTypeTypedParent" FromRole="TypedParent" ToRole="ParentType" /> <Property Type="Int32" Name="ParentTypeId" Nullable="false" /> </EntityType> <Association Name="ParentTypeTypedParent"> <End Type="PasibandymaiModel.ParentType" Role="ParentType" Multiplicity="1" /> <End Type="PasibandymaiModel.TypedParent" Role="TypedParent" Multiplicity="*" /> <ReferentialConstraint> <Principal Role="ParentType"> <PropertyRef Name="ParentTypeId" /> </Principal> 
<Dependent Role="TypedParent"> <PropertyRef Name="ParentTypeId" /> </Dependent> </ReferentialConstraint> </Association> </Schema> </edmx:ConceptualModels> <!-- C-S mapping content --> <edmx:Mappings> <Mapping Space="C-S" xmlns="http://schemas.microsoft.com/ado/2008/09/mapping/cs"> <EntityContainerMapping StorageEntityContainer="PasibandymaiModelStoreContainer" CdmEntityContainer="PasibandymaiEntities"> <EntitySetMapping Name="ParentType"> <QueryView> SELECT VALUE PasibandymaiModel.ParentType(tp.ParentTypeId, tp.Name) FROM PasibandymaiModelStoreContainer.ParentType AS tp </QueryView> </EntitySetMapping> <EntitySetMapping Name="SuperParent"> <QueryView> SELECT VALUE CASE WHEN (np.ParentId IS NOT NULL) THEN PasibandymaiModel.NamedParent(sp.ParentId, sp.SomeAttribute, np.Name) WHEN (tp.ParentId IS NOT NULL) THEN PasibandymaiModel.TypedParent(sp.ParentId, sp.SomeAttribute, tp.ParentTypeId) END FROM PasibandymaiModelStoreContainer.SuperParent AS sp LEFT JOIN PasibandymaiModelStoreContainer.NamedParent AS np ON sp.ParentId = np.ParentId LEFT JOIN PasibandymaiModelStoreContainer.TypedParent AS tp ON sp.ParentId = tp.ParentId </QueryView> <QueryView TypeName="PasibandymaiModel.TypedParent"> SELECT VALUE PasibandymaiModel.TypedParent(sp.ParentId, sp.SomeAttribute, tp.ParentTypeId) FROM PasibandymaiModelStoreContainer.SuperParent AS sp INNER JOIN PasibandymaiModelStoreContainer.TypedParent AS tp ON sp.ParentId = tp.ParentId </QueryView> <QueryView TypeName="PasibandymaiModel.NamedParent"> SELECT VALUE PasibandymaiModel.NamedParent(sp.ParentId, sp.SomeAttribute, np.Name) FROM PasibandymaiModelStoreContainer.SuperParent AS sp INNER JOIN PasibandymaiModelStoreContainer.NamedParent AS np ON sp.ParentId = np.ParentId </QueryView> </EntitySetMapping> </EntityContainerMapping> </Mapping> </edmx:Mappings> </edmx:Runtime> </edmx:Edmx> I have tried using AssociationSetMapping instead of an Association with a ReferentialConstraint, but then I couldn't insert related entities at once, because Entity Framework didn't provide the entity key of the inserted entities for the related entities. Thanks for any ideas.

    Read the article

< Previous Page | 170 171 172 173 174 175 176 177 178 179 180 181  | Next Page >