Search Results

Search found 2313 results on 93 pages for 'twice'.


  • Using "gedit", a string of errors occurs

    - by Kumuluzz
    I'm trying to write some small C programs in the terminal with gedit, but every time I use gedit a string of errors occurs. When I open a new file nothing happens, but the moment I save the file the string of errors appears. Likewise, if I open an already existing file (not a new one), all the lines of errors are written as soon as the gedit window shows the old file. In both cases it takes less than a second and nothing more happens. An example of the error: "error: line 35272: 0 is wrong flag id". They are all similar to this, except that the line number differs; there are about 50 of them. I'm running 11.10, installed just a couple of days ago (yes, I'm a newbie), and I've applied all the recent updates. I've tried reinstalling gedit via:

        sudo apt-get --reinstall install gedit

    That kind of made it worse: now a lot of the lines are shown twice. So now it goes (this is a copy of the first lines of error):

        error: line 6787: 0 is wrong flag id
        error: line 10034: 0 is wrong flag id
        error: line 10034: 0 is wrong flag id
        error: line 11351: 0 is wrong flag id
        error: line 11351: 0 is wrong flag id
        error: line 11849: 0 is wrong flag id
        error: line 11849: 0 is wrong flag id
        error: line 15609: 0 is wrong flag id
        error: line 15609: 0 is wrong flag id
        error: line 19814: 0 is wrong flag id

  • Is Reading the Spec Enough?

    - by jozefg
    This question is centered around Scheme but could really apply to any Lisp or programming language in general. Background: I recently picked up Scheme again, having toyed with it once or twice before. In order to solidify my understanding of the language, I found the Revised^5 Report on the Algorithmic Language Scheme and have been reading through that, along with my compiler/interpreter's (Chicken Scheme) listed extensions/implementations. Additionally, in order to see this applied, I have been actively seeking out Scheme code in open source projects and trying to read and understand it. This has been sufficient so far for understanding the syntax of Scheme, and I've completed almost all of the Ninety-nine Scheme Problems (see here) as well as a decent number of Project Euler problems. Question: While this hasn't been an issue so far and my solutions closely match those provided, am I missing out on a great part of Scheme? To phrase my question more generally, is reading the specification of a language, along with well-written code in that language, sufficient to learn from? Or are other resources, such as books, lectures, videos and blogs, necessary for the learning process as well?

  • Reliance on the compiler

    - by koan
    I've been programming in C and C++ for some time, although I would say I'm far from being an expert. For some time I've been using various strategies to develop my code, such as unit tests, test-driven design, code reviews and so on. When I wrote my first programs in BASIC I typed in long listings before finding they would not run, and they were a nightmare to debug. So I learnt to write a small bit and then test it. These days I often find myself repeatedly writing a small bit of code and then using the compiler to find all the mistakes. That's OK if it picks up a typo, but when you start adjusting parameter types etc. just to make it compile, you can screw up the design. It also seems that the compiler is creeping into the design process when it should only be used for checking syntax. There's a danger here of over-reliance on the compiler to make my programs better. Are there better strategies than this? I vaguely remember an article some time ago about a company developing a type of C compiler where an extra header file also specified the prototypes. The idea was that inconsistencies in the API definition would be easier to catch if you had to define it twice in different ways.

  • Trouble installing from disk

    - by SuperNatural
    I'm writing this in desperation: Windows is slowly killing me and I need to change my home PC's OS to Ubuntu 11.04 as soon as possible. I created a USB flash drive to install Ubuntu, twice, and both times it failed to begin the install when I restarted my PC. I read on another forum that you might have to change the boot sequence in the BIOS, but pressing F2 to enter it didn't work. After a lot of cursing, I made myself an Ubuntu install CD and booted. To my excitement, it now displayed... Try Ubuntu and Install Ubuntu. I clicked Install Ubuntu, which led me to the "Preparing to install Ubuntu" screen; I checked "Download updates while installing" and clicked Forward. The very next screen is "Allocate drive space". I assume there are meant to be drives listed there, but mine is just a blank box, and underneath it all the options to create a new partition table, add, change, delete and revert are greyed out. There is a drop-down menu labelled "Device for boot loader installation", but the only option is /dev/sda. When I click Install, a "no root file system" error comes up telling me to please correct this from the partitioning menu. I am extremely frustrated. Please, can anyone help me?

  • Just installed Ubuntu 12.04. When booting, all I get is a black screen with cursor

    - by user66378
    Installation appears to go fine. After rebooting, I get my motherboard loading screens, but when it comes time for Ubuntu to boot, I just get a black screen with a blinking white underscore in the top-left - same as I got when waiting for the install CD to load, except it lasts forever. The only keypress it seems to recognize is ctrl+alt+del, which reboots. Letters don't register, function keys w/ or w/o modifiers do nothing. I've installed Ubuntu 12.04 twice and got the same error. The first time, I installed it as the only OS, and had it take up the whole disk. The second time, I installed Windows 7 first, then Ubuntu by specifying custom partitions. After this install, it would boot straight to Windows without showing grub. I used EasyBCD to add the Ubuntu installation to grub, and this got grub to show, and let me select it, but it led back to the same error described up top. I've had Linux Mint 11 and 12 installed on this PC, but was unable to get previous versions of Ubuntu to install (always had errors while installing, not after). Hardware: Intel Core i7-2600K Sandy Bridge 3.4GHz (3.8GHz Turbo Boost) LGA 1155 ASUS SABERTOOTH P67 (REV 3.0) LGA 1155 Intel P67 SATA 6Gb/s USB 3.0 ATX Intel Motherboard EVGA 01G-P3-1371-TR GeForce GTX 460 (Fermi) CORSAIR Vengeance 16GB (4 x 4GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Western Digital RE4 WD5003ABYX 500GB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive

  • 3.2.0-31-generic version doesn't boot

    - by user92526
    12.04 used to work just fine a month ago. Since August, when I installed all of the available updates, my Mac Pro has been acting weird. If I restart the computer, the screen gets stuck on the purple background without any text. If I then restart again, I can choose which version of Linux I want to run. If I select 3.2.0-31-generic, the machine gets stuck with a blinking cursor. If I instead boot 3.2.0-31-generic in recovery mode, the machine gets stuck in a "..... memory freed End of Stack" state. I can't boot through this kernel at all. I have to choose an older version such as 3.2.0-12-generic to boot my Mac Pro, but I have to boot twice to get into that version. I was wondering:
    -- if you could provide me a solution to boot the 3.2.0-31-generic version;
    -- if you could provide me a solution to boot into any version the first time I restart (usually I have to kill the first restart);
    -- if there is an option in Linux to choose which version I want to run every time I restart, so that I don't have to choose every time.
    Thank you

  • AR9485 wireless card loses ability to connect after hibernation

    - by DrewDiezel
    This has happened twice now; the first time it happened I just reinstalled Ubuntu. The only things that I've been messing with in terms of networking have been restarting the network-manager service after I resume from hibernate (because it never works right after resume, but restarting the service fixes that), and changing my MAC address with ifconfig wlan0 down && ifconfig wlan0 hw ether 11:22:33:44:55 && ifconfig wlan0 up. Using tail | dmesg returns a bunch of attempts to connect, saying something like "authenticating ... okay you're connected! now I'm going to just disconnect because I feel like it" (reason=3). Any ideas? If it helps, maybe I'll add a picture of the tail | dmesg output later. My wireless drivers are as follows: Atheros AR9485 Wireless Network Adapter, Hamachi Network Interface, Microsoft Virtual WiFi Miniport Adapter, Realtek PCIe GBE Family Controller, VirtualBox Host-Only Ethernet Adapter. Here is a pastebin of some terminal outputs that could help. Also, the reddit thread has a few answers, for anyone curious; it's located here. It is entirely possible that it's just my school wifi blocking out my laptop and my home wifi sucking. Wifi does work at places other than my home and school; my phone, however, can connect at both places. Here is a pastebin of the results of the wireless-info script.

  • When is a SQL function not a function?

    - by Rob Farley
    Should SQL Server even have functions? (Oh yeah – this is a T-SQL Tuesday post, hosted this month by Brad Schulz) Functions serve an important part of programming, in almost any language. A function is a piece of code that is designed to return something, as opposed to a piece of code which isn’t designed to return anything (which is known as a procedure). SQL Server is no different. You can call stored procedures, even from within other stored procedures, and you can call functions and use these in other queries. Stored procedures might query something, and therefore ‘return data’, but a function in SQL is considered to have the type of the thing returned, and can be used accordingly in queries. Consider the internal GETDATE() function.

        SELECT GETDATE(), SomeDatetimeColumn FROM dbo.SomeTable;

    There’s no logical difference between the field that is being returned by the function and the field that’s being returned by the table column. Both are the datetime field – if you didn’t have inside knowledge, you wouldn’t necessarily be able to tell which was which. And so as developers, we find ourselves wanting to create functions that return all kinds of things – functions which look up values based on codes, functions which do string manipulation, and so on. But it’s rubbish. Ok, it’s not all rubbish, but it mostly is. And this isn’t even considering the SARGability impact. It’s far more significant than that. (When I say the SARGability aspect, I mean “because you’re unlikely to have an index on the result of some function that’s applied to a column, so try to invert the function and query the column in an unchanged manner”) I’m going to consider the three main types of user-defined functions in SQL Server:

        Scalar
        Inline Table-Valued
        Multi-statement Table-Valued

    I could also look at user-defined CLR functions, including aggregate functions, but not today. I figure that most people don’t tend to get around to doing CLR functions, and I’m going to focus on the T-SQL-based user-defined functions. Most people split these types of function up into two types. So do I. Except that most people pick them based on ‘scalar or table-valued’. I’d rather go with ‘inline or not’. If it’s not inline, it’s rubbish. It really is. Let’s start by considering the two kinds of table-valued function, and compare them. These functions are going to return the sales for a particular salesperson in a particular year, from the AdventureWorks database.
        CREATE FUNCTION dbo.FetchSales_inline(@salespersonid int, @orderyear int)
        RETURNS TABLE
        AS
        RETURN (
            SELECT e.LoginID as EmployeeLogin, o.OrderDate, o.SalesOrderID
            FROM Sales.SalesOrderHeader AS o
            LEFT JOIN HumanResources.Employee AS e
            ON e.EmployeeID = o.SalesPersonID
            WHERE o.SalesPersonID = @salespersonid
            AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')
            AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')
        );
        GO

        CREATE FUNCTION dbo.FetchSales_multi(@salespersonid int, @orderyear int)
        RETURNS @results TABLE (
            EmployeeLogin nvarchar(512),
            OrderDate datetime,
            SalesOrderID int
        )
        AS
        BEGIN
            INSERT @results (EmployeeLogin, OrderDate, SalesOrderID)
            SELECT e.LoginID, o.OrderDate, o.SalesOrderID
            FROM Sales.SalesOrderHeader AS o
            LEFT JOIN HumanResources.Employee AS e
            ON e.EmployeeID = o.SalesPersonID
            WHERE o.SalesPersonID = @salespersonid
            AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')
            AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')
            ;
            RETURN
        END;
        GO

    You’ll notice that I’m being nice and responsible with the use of the DATEADD function, so that I have SARGability on the OrderDate filter. Regular readers will be hoping I’ll show what’s going on in the execution plans here. Here I’ve run two SELECT * queries with the “Show Actual Execution Plan” option turned on. Notice that the ‘Query cost’ of the multi-statement version is just 2% of the ‘Batch cost’. But also notice there’s trickery going on. And it’s nothing to do with that extra index that I have on the OrderDate column. Trickery. Look at it – clearly, the first plan is showing us what’s going on inside the function, but the second one isn’t. The second one is blindly running the function, and then scanning the results. There’s a Sequence operator which is calling the TVF operator, and then calling a Table Scan to get the results of that function for the SELECT operator. But surely it still has to do all the work that the first one is doing... To see what’s actually going on, let’s look at the Estimated plan. Now, we see the same plans (almost) that we saw in the Actuals, but we have an extra one – the one that was used for the TVF. Here’s where we see the inner workings of it. You’ll probably recognise the right-hand side of the TVF’s plan as looking very similar to the first plan – but it’s now being called by a stack of other operators, including an INSERT statement to be able to populate the table variable that the multi-statement TVF requires. And the cost of the TVF is 57% of the batch! But it gets worse. Let’s consider what happens if we don’t need all the columns. We’ll leave out the EmployeeLogin column. Here, we see that the inline function call has been simplified down. It doesn’t need the Employee table. The join is redundant and has been eliminated from the plan, making it even cheaper. But the multi-statement plan runs the whole thing as before, only removing the extra column when the Table Scan is performed. A multi-statement function is a lot more powerful than an inline one. An inline function can only be the result of a single sub-query. It’s essentially the same as a parameterised view, because views demonstrate this same behaviour of extracting the definition of the view and using it in the outer query. A multi-statement function is clearly more powerful because it can contain far more complex logic. But a multi-statement function isn’t really a function at all. It’s a stored procedure.
    It’s wrapped up like a function, but behaves like a stored procedure. It would be completely unreasonable to expect that a stored procedure could be simplified down to recognise that not all the columns might be needed, and yet this is part of the pain associated with this procedural function situation. The biggest clue that a multi-statement function is more like a stored procedure than a function is the “BEGIN” and “END” statements that surround the code. If you try to create a multi-statement function without these statements, you’ll get an error – they are very much required. When I used to present on this kind of thing, I even used to call it “The Dangers of BEGIN and END”, and yes, I’ve written about this type of thing before in a similarly-named post over at my old blog. Now how about scalar functions... Suppose we wanted a scalar function to return the count of these.

        CREATE FUNCTION dbo.FetchSales_scalar(@salespersonid int, @orderyear int)
        RETURNS int
        AS
        BEGIN
            RETURN (
                SELECT COUNT(*)
                FROM Sales.SalesOrderHeader AS o
                LEFT JOIN HumanResources.Employee AS e
                ON e.EmployeeID = o.SalesPersonID
                WHERE o.SalesPersonID = @salespersonid
                AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')
                AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')
            );
        END;
        GO

    Notice the evil words? They’re required. Try to remove them, you just get an error. That’s right – any scalar function is procedural, despite the fact that you wrap up a sub-query inside that RETURN statement. It’s as ugly as anything. Hopefully this will change in future versions. Let’s have a look at how this is reflected in an execution plan. Here’s a query, its Actual plan, and its Estimated plan:

        SELECT e.LoginID, y.year, dbo.FetchSales_scalar(p.SalesPersonID, y.year) AS NumSales
        FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
        CROSS JOIN Sales.SalesPerson AS p
        LEFT JOIN HumanResources.Employee AS e
        ON e.EmployeeID = p.SalesPersonID;

    We see here that the cost of the scalar function is about twice that of the outer query. Nicely, the query optimizer has worked out that it doesn’t need the Employee table, but that’s a bit of a red herring here. There’s actually something way more significant going on. If I look at the properties of that UDF operator, it tells me that the Estimated Subtree Cost is 0.337999. If I just run the query SELECT dbo.FetchSales_scalar(281,2003); we see that the UDF cost is still unchanged. You see, this 0.0337999 is the cost of running the scalar function ONCE. But when we ran that query with the CROSS JOIN in it, we returned quite a few rows. 68 in fact. Could’ve been a lot more, if we’d had more salespeople or more years. And so we come to the biggest problem. This procedure (I don’t want to call it a function) is getting called 68 times – each one roughly twice as expensive as the outer query. And because it’s calling it in a separate context, there is even more overhead that I haven’t considered here. The cheek of it, to say that the Compute Scalar operator here costs 0%! I know a number of IT projects that could’ve used that kind of costing method, but that’s another story that I’m not going to go into here. Let’s look at a better way. Suppose our scalar function had been implemented as an inline one. Then it could have been expanded out like a sub-query.
    It could’ve run something like this:

        SELECT e.LoginID, y.year,
            (SELECT COUNT(*)
             FROM Sales.SalesOrderHeader AS o
             LEFT JOIN HumanResources.Employee AS e
             ON e.EmployeeID = o.SalesPersonID
             WHERE o.SalesPersonID = p.SalesPersonID
             AND o.OrderDate >= DATEADD(year,y.year-2000,'20000101')
             AND o.OrderDate < DATEADD(year,y.year-2000+1,'20000101')
            ) AS NumSales
        FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
        CROSS JOIN Sales.SalesPerson AS p
        LEFT JOIN HumanResources.Employee AS e
        ON e.EmployeeID = p.SalesPersonID;

    Don’t worry too much about the Scan of the SalesOrderHeader underneath a Nested Loop. If you remember from plenty of other posts on the matter, execution plans don’t push the data through. That Scan only runs once. The Index Spool sucks the data out of it and populates a structure that is used to feed the Stream Aggregate. The Index Spool operator gets called 68 times, but the Scan only once (the Number of Executions property demonstrates this). Here, the Query Optimizer has a full picture of what’s being asked, and can make the appropriate decision about how it accesses the data. It can simplify it down properly. To get this kind of behaviour from a function, we need it to be inline. But without inline scalar functions, we need to make our function be table-valued. Luckily, that’s ok.

        CREATE FUNCTION dbo.FetchSales_inline2(@salespersonid int, @orderyear int)
        RETURNS table
        AS
        RETURN (
            SELECT COUNT(*) as NumSales
            FROM Sales.SalesOrderHeader AS o
            LEFT JOIN HumanResources.Employee AS e
            ON e.EmployeeID = o.SalesPersonID
            WHERE o.SalesPersonID = @salespersonid
            AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')
            AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')
        );
        GO

    But we can’t use this as a scalar. Instead, we need to use it with the APPLY operator.

        SELECT e.LoginID, y.year, n.NumSales
        FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
        CROSS JOIN Sales.SalesPerson AS p
        LEFT JOIN HumanResources.Employee AS e
        ON e.EmployeeID = p.SalesPersonID
        OUTER APPLY dbo.FetchSales_inline2(p.SalesPersonID, y.year) AS n;

    And now, we get the plan that we want for this query. All we’ve done is tell the function that it’s returning a table instead of a single value, and removed the BEGIN and END statements. We’ve had to name the column being returned, but what we’ve gained is an actual inline simplifiable function. And if we wanted it to return multiple columns, it could do that too. I really consider this function to be superior to the scalar function in every way. It does need to be handled differently in the outer query, but in many ways it’s a more elegant method there too. The function calls can be put amongst the FROM clause, where they can then be used in the WHERE or GROUP BY clauses without fear of calling the function multiple times (another horrible side effect of functions). So please. If you see BEGIN and END in a function, remember it’s not really a function, it’s a procedure. And then fix it. @rob_farley

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF), and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights for both events. CLSF - 17th October XFS update (led by Jeff Liu) XFS keeps rapid progress with a lot of changes, especially focused on the infrastructure/performance improvements as well as  new feature development.  This can be reflected with a sample statistics among XFS/Ext4+JBD2/Btrfs via: # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-) Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-) Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-) What made up those changes in XFS? Self-describing metadata(CRC32c). This is a new feature and it contributed about 70% code changes, it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for v5 superblock. Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than runtime to reduce the the CPU overhead. User namespace support. So both XFS and USERNS can be enabled on kernel configuration begin from Linux 3.10. Thanks Dwight Engen's efforts for this thing. Split project/group quota inodes. Originally, project quota can not be enabled with group quota at the same time because they were share the same quota file inode, now it works but only for v5 super block. i.e, CRC enabled. CONFIG_XFS_WARN, an new lightweight runtime debugger which can be deployed in production environment. Readahead log object recovery, this change can speed up the log replay progress significantly. Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes with post-EOF space due to speculative preallocation, support improved quota management to free up a significant amount of unwritten space when at or near EDQUOT. It support backgroup scanning which occurs on a longish interval(5 mins by default, tunable), and on-demand scanning/trimming via ioctl(2). Bitter arguments ensued from this session, especially for the comparison between Ext4 and Btrfs in different areas, I have to spent a whole morning of the 1st day answering those questions. We basically agreed on XFS is the best choice in Linux nowadays because: Stable, XFS has a good record in stability in the past 10 years. Fengguang Wu who lead the 0-day kernel test project also said that he has observed less error than other filesystems in the past 1+ years, I own it to the XFS upstream code reviewer, they always performing serious code review as well as testing. Good performance for large/small files, XFS does not works very well for small files has already been an old story for years. Best choice (maybe) for distributed PB filesystems. e.g, Ceph recommends delopy OSD daemon on XFS because Ext4 has limited xattr size. Best choice for large storage (>16TB). Ext4 does not support a single file more than around 15.95TB. Scalability, any objection to XFS is best in this point? :) XFS is better to deal with transaction concurrency than Ext4, why? The maximum size of the log in XFS is 2038MB compare to 128MB in Ext4. Misc. Ext4 is widely used and it has been proved fast/stable in various loads and scenarios, XFS just need more customers, and Btrfs is still on the road to be a manhood. 
Ceph Introduction (Led by Li Wang) This a hot topic.  Li gave us a nice introduction about the design as well as their current works. Actually, Ceph client has been included in Linux kernel since 2.6.34 and supported by Openstack since Folsom but it seems that it has not yet been widely deployment in production environment. Their major work is focus on the inline data support to separate the metadata and data storage, reduce the file access time, i.e, a file access need communication twice, fetch the metadata from MDS and then get data from OSD, and also, the small file access is limited by the network latency. The solution is, for the small files they would like to store the data at metadata so that when accessing a small file, the metadata server can push both metadata and data to the client at the same time. In this way, they can reduce the overhead of calculating the data offset and save the communication to OSD. For this feature, they have only run some small scale testing but really saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. The sequence read performance for 1K size files improved about 50%. I have asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed at production environment for large scale PB level storage, but they can not give a positive answer, looks Ceph even does not spread over Dreamhost (subject to confirmation). From Li, they only deployed Ceph for a small scale storage(32 nodes) although they'd like to try 6000 nodes in the future. Improve Linux swap for Flash storage (led by Shaohua Li) Because of high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges to efficiently use SSD for swap. SWAPOUT swap_map scan swap_map is the in-memory data structure to track swap disk usage, but it is a slow linear scan. It will become a bottleneck while finding many adjacent pages in the use of SSD. Shaohua Li have changed it to a cluster(128K) list, resulting in O(1) algorithm. However, this apporoach needs restrictive cluster alignment and only enabled for SSD. IO pattern In most cases, the swap io is in interleaved pattern because of mutiple reclaimers or a free cluster is shared by all reclaimers. Even though block layer can merge interleaved IO to some extent, but we cannot count on it completely. Hence the per-cpu cluster is added base on the previous change, it can help reclaimer do sequential IO and the block layer will be easier to merge IO. TLB flush: If we're reclaiming one active page, we should first move the page from active lru list to inactive lru list, and then reclaim the page from inactive lru to swap it out. During the process, we need to clear PTE twice: first is 'A'(ACCESS) bit, second is 'P'(PRESENT) bit. Processors need to send lots of ipi which make the TLB flush really expensive. Some works have been done to improve this, including rework smp_call_functiom_many() or remove the first TLB flush in x86, but there still have some arguments here and only parts of works have been pushed to mainline. SWAPIN: Page fault does iodepth=1 sync io, but it's a little waste if only issue a page size's IO. The obvious solution is doing swap readahead. 
But the current in-kernel swap readahead is arbitary(always 8 pages), and it always doesn't perform well for both random and sequential access workload. Shaohua introduced a new flag for madvise(MADV_WILLNEED) to do swap prefetch, so the changes happen in userspace API and leave the in-kernel readahead unchanged(but I think some improvement can also be done here). SWAP discard As we know, discard is important for SSD write throughout, but the current swap discard implementation is synchronous. He changed it to async discard which allow discard and write run in the same time. Meanwhile, the unit of discard is also optimized to cluster. Misc: lock contention For many concurrent swapout and swapin , the lock contention such as anon_vma or swap_lock is high, so he changed the swap_lock to a per-swap lock. But there still have some lock contention in very high speed SSD because of swapcache address_space lock. Zproject (led by Bob Liu) Bob gave us a very nice introduction about the current memory compression status. Now there are 3 projects(zswap/zram/zcache) which all aim at smooth swap IO storm and promote performance, but they all have their own pros and cons. ZSWAP It is implemented based on frontswap API and it uses a dynamic allocater named Zbud to allocate free pages. Zbud means pairs of zpages are "buddied" and it can only store at most two compressed pages in one page frame, so the max compress ratio is 50%. Each page frame is lru-linked and can do shink in memory pressure. If the compressed memory pool reach its limitation, shink or reclaim happens. It decompress the page frame into two new allocated pages and then write them to real swap device, but it can fail when allocating the two pages. ZRAM Acts as a compressed ramdisk and used as swap device, and it use zsmalloc as its allocator which has high density but may have fragmentation issues. Besides, page reclaim is hard since it will need more pages to uncompress and free just one page. ZRAM is preferred by embedded system which may not have any real swap device. Now both ZRAM and ZSWAP are in driver/staging tree, and in the mm community there are some disscussions of merging ZRAM into ZSWAP or viceversa, but no agreement yet. ZCACHE Handles file page compression but it is removed out of staging recently. From industry (led by Tang Jie, LSI) An LSI engineer introduced several new produces to us. The first is raid5/6 cards that it use full stripe writes to improve performance. The 2nd one he introduced is SandForce flash controller, who can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes. It's called DuraWrite and typical WA is 0.5. What's more, if enable its Dynamic Logical Capacity function module, the controller can do data compression which is transparent to upper layer. LSI testing shows that with this virtual capacity enables 1x TB drive can support up to 2x TB capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is for datebase. Another thing I think it's worth to mention is that a NV-DRAM memory in NMR/Raptor which is directly exposed to host system. Applications can directly access the NV-DRAM via a memory address - using standard system call mmap(). He said that it is very useful for database logging now. 
This kind of NVM produces are beginning to appear in recent years, and it is said that Samsung is building a research center in China for related produces. IMHO, NVM will bring an effect to current os layer especially on file system, e.g. its journaling may need to redesign to fully utilize these nonvolatile memory. OCFS2 (led by Canquan Shen) Without a doubt, HuaWei is the biggest contributor to OCFS2 in the past two years. They have posted 46 upstream patches and 39 patches have been merged. Their current project is based on 32/64 nodes cluster, but they also tried 128 nodes at the experimental stage. The major work they are working is to support ATS (atomic test and set), it can be works with DLM at the same time. Looks this idea is inspired by the vmware VMFS locking, i.e, http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html CLK - 18th October 2013 Improving Linux Development with Better Tools (Andi Kleen) This talk focused on how to find/solve bugs along with the Linux complexity growing. Generally, we can do this with the following kind of tools: Static code checkers tools. e.g, sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. This can help check a lot of things, simple mistakes, complex problems, but the challenges are: some are very slow, false positives, may need a concentrated effort to get false positives down. Especially, no static checker I found can follow indirect calls (“OO in C”, common in kernel): struct foo_ops { int (*do_foo)(struct foo *obj); } foo->do_foo(foo); Dynamic runtime checkers, e.g, thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite, then someone could run all the dynamic checkers. Fuzzers/test suites. e.g, Trinity is a great tool, it finds many bugs, but needs manual model for each syscall. Modern fuzzers around using automatic feedback, but notfor kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf Debuggers/Tracers to understand code, e.g, ftrace, can dump on events/oops/custom triggers, but still too much overhead in many cases to run always during debug. Tools to read/understand source, e.g, grep/cscope work great for many cases, but do not understand indirect pointers (OO in C model used in kernel), give us all “do_foo” instances: struct foo_ops { int (*do_foo)(struct foo *obj); } = { .do_foo = my_foo }; foo>do_foo(foo); That would be great to have a cscope like tool that understands this based on types/initializers XFS: The High Performance Enterprise File System (Jeff Liu) [slides] I gave a talk for introducing the disk layout, unique features, as well as the recent changes.   The slides include some charts to reflect the performances between XFS/Btrfs/Ext4 for small files. About a dozen users raised their hands when I asking who has experienced with XFS. I remembered that when I asked the same question in LinuxCon/Japan, only 3 people raised their hands, but they are Chris Mason, Ric Wheeler, and another attendee. The attendee questions were mainly focused on stability, and comparison with other file systems. Linux Containers (Feng Gao) The speaker introduced us that the purpose for those kind of namespaces, include mount/UTS/IPC/Network/Pid/User, as well as the system API/ABI. For the userspace tools, He mainly focus on the Libvirt LXC rather than us(LXC). 
Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver, it can manage containers, create namespace, create private filesystem layout for container, Create devices for container and setup resources controller via cgroup. In this talk, Feng also mentioned another two possible new namespaces in the future, the 1st is the audit, but not sure if it should be assigned to user namespace or not. Another is about syslog, but the question is do we really need it? In-memory Compression (Bob Liu) Same as CLSF, a nice introduction that I have already mentioned above. Misc There were some other talks related to ACPI based memory hotplug, smart wake-affinity in scheduler etc., but my head is not big enough to record all those things. -- Jeff Liu

  • Python XLWT attempt to overwrite cell workaround

    - by PPTim
    Hi, using the Python module xlwt, writing to the same cell twice throws an error:

        Message File Name Line Position
        Traceback
        <module> S:\********
        write C:\Python26\lib\site-packages\xlwt\Worksheet.py 1003
        write C:\Python26\lib\site-packages\xlwt\Row.py 231
        insert_cell C:\Python26\lib\site-packages\xlwt\Row.py 150
        Exception: Attempt to overwrite cell: sheetname=u'Sheet 1' rowx=1 colx=12

    with the code snippet

        def insert_cell(self, col_index, cell_obj):
            if col_index in self.__cells:
                if not self.__parent._cell_overwrite_ok:
                    msg = "Attempt to overwrite cell: sheetname=%r rowx=%d colx=%d" \
                        % (self.__parent.name, self.__idx, col_index)
                    raise Exception(msg)  # row 150
                prev_cell_obj = self.__cells[col_index]
                sst_idx = getattr(prev_cell_obj, 'sst_idx', None)
                if sst_idx is not None:
                    self.__parent_wb.del_str(sst_idx)
            self.__cells[col_index] = cell_obj

    It looks like the code raises an exception, which halts the entire process. Is removing the raise enough to allow cells to be overwritten? I appreciate xlwt's warning, but I thought the Pythonic way is to assume "we know what we're doing". I don't want to break anything else by touching the module.
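
    A sketch of a workaround that avoids patching xlwt itself: the overwrite check is only enforced when a worksheet's cell_overwrite_ok flag is False, and (in the xlwt versions I'm aware of) add_sheet accepts that flag, so it can be switched off per sheet. The sheet name and cell values below are made up for illustration:

        import xlwt

        book = xlwt.Workbook()
        # cell_overwrite_ok=True disables the "Attempt to overwrite cell" check
        # for this one sheet, leaving the module and other sheets untouched
        sheet = book.add_sheet('Sheet 1', cell_overwrite_ok=True)

        sheet.write(1, 12, 'first value')
        sheet.write(1, 12, 'second value')  # no exception; the last write wins
        book.save('example.xls')            # hypothetical output path

    This keeps the protection in place everywhere else, which seems safer than deleting the raise from Row.insert_cell.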

  • Org-mode properties for Emacs diary anniversaries?

    - by lecodesportif
    I am trying to have the "Birthday" property of an Org-mode contact entry added to the agenda automatically:

        * John
          :PROPERTIES:
          :Name: John
          :Birthday: 5 4 1900
          :END:

    This can be done manually for each entry using:

        %%(diary-anniversary 5 4 1900) John's birthday

    But I don't want to type the date twice. I would like to use the org-entry-get functionality to make diary-anniversary take the values of the Birthday and Name properties shown above. This is how I get the correct property values:

        %%(org-entry-get nil "Name")
        %%(org-entry-get nil "Birthday")

    But after several attempts, I still haven't managed to put the values in variables and pass them correctly to diary-anniversary. Any ideas how to do it?

  • Cannot create multiple instances of PowerPoint

    - by Excel20
    I'm working on a project where I need to use PowerPoint from C#.NET. Initially I always created one single instance. As of today, I would like to have multiple instances running. I do that like so:

        Type powerpointType = Type.GetTypeFromProgID("PowerPoint.Application");
        object instance1 = Activator.CreateInstance(powerpointType);
        object instance2 = Activator.CreateInstance(powerpointType);

    but when I ask for the handle of both instances, by calling

        hwnd = (int)powerpointType.GetProperty("HWND").GetValue(instance1, null);

    I get the same handle twice. My conclusion is that the application is started just once, and the Task Manager confirms that: only one process. How come there is only one instance of PowerPoint running, and how can I make it work?

  • Simple python oo issue

    - by Alex K
    Hello, have a look at this simple example. I don't quite understand why o1 prints "Hello Alex" twice. I would have thought that, because of the default, self.a would always start as an empty list. Could someone explain the rationale here? Thank you so much.

        class A(object):
            def __init__(self, a=[]):
                self.a = a

        o = A()
        o.a.append('Hello')
        o.a.append('Alex')
        print ' '.join(o.a)
        # >> prints Hello Alex

        o1 = A()
        o1.a.append('Hello')
        o1.a.append('Alex')
        print ' '.join(o1.a)
        # >> prints Hello Alex Hello Alex
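
    What is going on: the default value a=[] is evaluated once, when the def statement is executed, so every instance created without an explicit argument shares that single list; o and o1 are appending to the same object. A minimal sketch of the usual idiom, using None as a sentinel for "no list supplied":

        class A(object):
            def __init__(self, a=None):
                # build a fresh list per instance; a mutable default like []
                # is created only once and shared by all callers
                self.a = a if a is not None else []

        o = A()
        o.a.append('Hello')
        o.a.append('Alex')

        o1 = A()
        o1.a.append('Hello')
        o1.a.append('Alex')

        print ' '.join(o.a)   # Hello Alex
        print ' '.join(o1.a)  # Hello Alex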

  • DataGridView row added event

    - by p4bl0.666
    Hi all, I'm using a DataGridView and I bind a List to the DataSource. I already have the right columns and I map the fields exactly. What I'm trying to do is handle a sort of RowAdded or RowDataBound event (like in the ASP.NET GridView). The only event that I found is RowsAdded, but no matter how many items I have, it is fired only 4 times the first time I bind, and twice the other times, with values:

        e.RowCount: 1      e.RowIndex: 0
        e.RowCount: [n-1]  e.RowIndex: 1

    where n is the number of my items. Is there a way I can get a handle for each item?

  • iPhone: Failed to launch simulated application: Unknown error.

    - by Schubert
    This is a new iPhone project, with only 1 target (different from this question). On build we get: Failed to launch simulated application: Unknown error. The Google again gives us nothing; lots of people have encountered this and there are lots of crazy ideas to try ("oh, clean the build", "clear the cache", "twiddle this flag"), and none of them work consistently. We can reproduce this on two different machines with SDK 2.2.1 and 3.0 beta. It's not the install on the machines, since other iPhone projects work just fine, so we believe it has something to do with the config of this particular project, but after combing through the config twice we can't spot the problem. Vanna, I'd like to buy a clue for $200 please. Tried: Xcode menu - Clear cache. Tried: clean all targets. Tried: rm -rf ~/Library/Application\ Support/iPhone\ Simulator

  • Django: UserProfile with Unique Foreign Key in Django Admin

    - by lazerscience
    Hi, I have extended Django's User model using a custom user profile called UserExtension. It is related to User through a unique ForeignKey relationship, which enables me to edit it in the admin in an inline form. I'm using a signal to create a new profile for every new user:

        def create_user_profile(sender, instance, created, **kwargs):
            if created:
                try:
                    profile, created = UserExtension.objects.get_or_create(user=instance)
                except:
                    pass

        post_save.connect(create_user_profile, sender=User)

    (as described, for example, here: http://stackoverflow.com/questions/44109/extending-the-user-model-with-custom-fields-in-django) The problem is that, if I create a new user through the admin, I get an IntegrityError on saving: "column user_id is not unique". It doesn't seem that the signal is called twice, but I guess the admin is trying to save the profile AFTERWARDS? But I need the creation through the signal if I create a new user in other parts of the system!
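
    One frequent cause of this error is the signal handler being registered more than once (for example when the module that connects it is imported twice), so two profiles get created for the same user; another is the admin's inline profile form saving a second profile after the signal has already created one. A hedged sketch of a more defensive registration, assuming the same UserExtension model as above (the dispatch_uid string is arbitrary):

        from django.contrib.auth.models import User
        from django.db.models.signals import post_save

        # UserExtension is assumed to be the profile model defined in this app
        def create_user_profile(sender, instance, created, **kwargs):
            if created:
                # get_or_create is a no-op if a profile already exists,
                # e.g. one saved by an admin inline
                UserExtension.objects.get_or_create(user=instance)

        # dispatch_uid makes the connection idempotent: connecting again with
        # the same uid does not register the handler a second time
        post_save.connect(create_user_profile, sender=User,
                          dispatch_uid='create_user_extension_profile')

    If the error persists, it may be the inline form itself colliding with the signal-created profile, in which case the inline usually has to be hidden on the add-user page.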

  • Translate query to NHibernate

    - by Rob Walker
    I am trying to learn NHibernate, and am having difficulty translating a SQL query into one using the criteria API. The data model has tables: Part (Id, Name, ...), Order (Id, PartId, Qty), Shipment (Id, PartId, Qty). For all the parts I want to find the total quantity ordered and the total quantity shipped. In SQL I have:

        select shipment.part_id, sum(shipment.quantity), sum(order.quantity)
        from shipment cross join order on order.part_id = shipment.part_id
        group by shipment.part_id

    Alternatively:

        select id,
               (select sum(quantity) from shipment where part_id = part.id),
               (select sum(quantity) from order where part_id = part.id)
        from part

    But the latter query takes over twice as long to execute. Any suggestions on how to create these queries in (fluent) NHibernate? I have all the tables mapped, and loading/saving/etc. the entities works fine.

  • HTTP Header - ntCoent-Length

    - by DMcKenna
    I get the following HTTP response headers in a particular response. All looks okay. However, I have noticed that the content length appears twice...

        Content-Length: 2424
        ntCoent-Length: 2424

    Is there a particular reason why the content length is returned a second time as ntCoent-Length?

        HTTP/1.0 200 OK
        Date: Wed, 26 May 2010 09:38:19 GMT
        Server: Apache
        P3P: CP="NOI DSP COR CURa ADMa TA1a OUR BUS IND UNI COM NAV INT"
        Accept-Charset: iso-8859-1, unicode-1-1;q=0.8
        Expires: Sun, 15 Jul 1990 00:00:00 GMT
        Pragma: no-cache
        Cache-Control: no-cache
        Content-Language: en
        ntCoent-Length: 2424
        Connection: close
        Content-Type: text/html;charset=iso-8859-1
        Content-Length: 2424

  • Code Golf - pi day

    - by gnibbler
    The Challenge The shortest code by character count to display a representation of a circle of radius R using the * character, followed by an approximation of pi. Input is a single number, R. Since most computers seem to have an almost 2:1 ratio, you should only output lines where y is odd. The approximation of pi is given by dividing twice the number of * characters by R². The approximation should be correct to at least 6 significant digits. Leading or trailing zeros are permitted, so for example any of 3, 3.000000, 003 is accepted for the inputs of 2 and 4. Code count includes input/output (i.e., a full program). Test Cases Input 2 Output *** *** 3.0 Input 4 Output ***** ******* ******* ***** 3.0 Input 8 Output ******* ************* *************** *************** *************** *************** ************* ******* 3.125 Input 10 Output ********* *************** ***************** ******************* ******************* ******************* ******************* ***************** *************** ********* 3.16
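
    Not a golfed entry, just a plain reference sketch in Python of what the challenge asks for. The membership test x*x + y*y <= R*R is an assumption on my part, chosen because it reproduces all four sample outputs (6, 24, 100 and 158 stars give 3.0, 3.0, 3.125 and 3.16 respectively):

        import sys

        def circle_pi(r):
            stars = 0
            for y in range(-r + 1, r):
                if y % 2 == 0:        # only rows where y is odd (2:1 aspect ratio)
                    continue
                row = ''.join('*' if x * x + y * y <= r * r else ' '
                              for x in range(-r + 1, r))
                stars += row.count('*')
                print(row.rstrip())
            # pi is approximated by twice the number of stars divided by R squared
            print(2.0 * stars / (r * r))

        circle_pi(int(sys.stdin.readline()))

    Golfing it down is left to the answers; the sketch is only meant to pin down the expected behaviour.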

  • Sharepoint Web Part OnPreRender PostBack Dynamic TextBoxes Not Accessible In Button Handler

    - by Matt
    I have a custom web part which extends System.Web.UI.WebControls.WebPart and implements an EditorPart. I have all the static controls being added in the overridden CreateChildControls method, as I know this makes them persistent across PostBacks. However, I have some TextBoxes being added in the overridden OnPreRender method, because the actual TextBoxes that I add depend upon the data returned from a Web Service which I call in OnPreRender. The Web Service has to be called in OnPreRender because I need some of the property values that are set in the EditorPart. If I build this logic into the CreateChildControls method, the data is obviously not available on the first PostBack after an edit is applied, because the PostBack events are restored after CreateChildControls. This means the page has to be posted twice before the data is refreshed, and that is just kludgy. How can I make these TextBoxes, along with their entered text values, persistent across PostBacks so that I can access them in the button handler? Thanks for any help, Matt

  • Code Golf - PI day

    - by gnibbler
    The Challenge The shortest code by character count to display a representation of a circle of radius R using the * character, followed by an approximation of pi. Input is a single number, R. Since most computers seem to have an almost 2:1 ratio, you should only output lines where y is odd. The approximation of pi is given by dividing twice the number of * characters by R squared. The approximation should be correct to at least 6 significant digits. Leading or trailing zeros are permitted, so for example any of 3, 3.000000, 003 is accepted for the inputs of 2 and 4. Code count includes input/output (i.e., a full program). Test Cases Input 2 Output *** *** 3.0 Input 4 Output ***** ******* ******* ***** 3.0 Input 8 Output ******* ************* *************** *************** *************** *************** ************* ******* 3.125 Input 10 Output ********* *************** ***************** ******************* ******************* ******************* ******************* ***************** *************** ********* 3.16

  • Django QuerySet filter method returns multiple entries for one record

    - by Yaroslav
    Trying to retrieve blogs (see model description below) that contain entries satisfying some criteria:

        Blog.objects.filter(entries__title__contains='entry')

    The result is:

        [<Blog: blog1>, <Blog: blog1>]

    The same blog object is retrieved twice because of the JOIN performed to filter objects on the related model. What is the right syntax for filtering only unique objects? Data model:

        class Blog(models.Model):
            name = models.CharField(max_length=100)
            def __unicode__(self):
                return self.name

        class Entry(models.Model):
            title = models.CharField(max_length=100)
            blog = models.ForeignKey(Blog, related_name='entries')
            def __unicode__(self):
                return self.title

    Sample data:

        b1 = Blog.objects.create(name='blog1')
        e1 = Entry.objects.create(title='entry 1', blog=b1)
        e1 = Entry.objects.create(title='entry 2', blog=b1)
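
    A short sketch of the usual fix: the duplicates come from the JOIN against Entry, and QuerySet.distinct() adds SELECT DISTINCT to the generated SQL, so each matching blog is returned once:

        blogs = Blog.objects.filter(entries__title__contains='entry').distinct()
        # [<Blog: blog1>]  -- blog1 now appears once even though two entries matched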

  • Start position for a reused t- sql cursor?

    - by Span
    I'm working on a stored procedure that uses a cursor on a temporary table (I have read a bit about why cursors are undesirable, but in this situation I believe I still need to use one). In my procedure I need to step through the rows of the table twice. Having declared the cursor, already stepped through the temporary table once and closed the cursor, would the position of the cursor remain at the end of the table when re-opened or does it reposition itself to the initial starting position (ie: before the first row)? Alternatively, to reposition the cursor must I do a 'FETCH FIRST' before stepping through again? Am I right to assume the 'cost' of doing this repositioning and reusing the cursor would be less than deallocating and reallocating the cursor?

  • Webkit and Safari fire mousemove even when mouse doesn't move

    - by Roel
    I've read about issues where the mousemove event is fired twice in Safari/WebKit, but the problem I'm facing is that mousemove fires even when the mouse is not moved. That is: it already fires when the mouse cursor is positioned above the context that the event is attached to when the page is loaded/refreshed. And because I'm attaching it to document (the entire viewport of the browser), it fires right away in Safari. I've tried to attach it to the html element, to the body and to a wrapper div. No change.

        $(document).bind('mousemove', function() {
            alert('Mouse moved!');
            $(document).unbind('mousemove');
        });

    It does work OK in other browsers. Anyone seeing what I'm doing wrong? Thanks.

  • AJAX binds jquery events multiple times

    - by Dynde
    Hi... I have a masterpage setup, with a pageLoad in the topmost masterpage, which calls pageLoad2 for nested masterpages which calls pageLoad3 for content pages. In my content page I have a jquery click event and in my nested masterpage I have a web user control. Whenever I use the user control in the nested masterpage, it rebinds the click event in the content page (undoubtedly because the pageLoad3 is called again), but this makes the click event fire twice on a single click. The problem gets worse the higher up masterpages you go (eg. fires 3 times if user control from topmost masterpage is called). Can anyone tell me how to make sure it only binds the jquery events once?
