Search Results

Search found 50 results on 2 pages for 'hao shen'.

Page 2/2 | < Previous Page | 1 2 

  • How to refresh Mono ASP.NET page without restarting the web server?

    - by Hao
    When I make changes to a file, Mono ASP.NET doesn't see my changes; I have to do this: sudo /etc/init.d/apache2 restart. I remember that when Mono executes ASP.NET, it caches the compiled output somewhere. Before, when the updated page didn't come up, I just deleted that cached compiled code, but I've forgotten the exact path. How do I make Mono ASP.NET notice that I have changed the program, without restarting the web server?
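    A hedged sketch of that cache-clearing step: the directory name varies by Mono version and by the user Apache runs as, so the path below is an assumption, not a known-good one:

        # Mono compiles ASP.NET pages into a per-user temp directory;
        # removing it forces a recompile on the next request.
        sudo rm -rf /tmp/www-data-temp-aspnet-*
        # A graceful reload is gentler than a full restart:
        sudo /etc/init.d/apache2 reload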

    Read the article

  • I cannot grok MVC: what is it, and what is it not?

    - by Hao
    I cannot grok what MVC is. What mindset or programming model should I acquire so that MVC can instantly "lightbulb" in my head? If not instantly, what simple programs/projects should I try first so I can apply the neat things MVC brings to programming? OOP is intuitive and easy: objects are all around us, and the benefits of code reuse under the OOP paradigm instantly click with anyone. You can talk to anybody about OOP for a few minutes, walk through some examples, and they will get it. While OOP somehow raises the intuitiveness of programming, MVC seems to do the opposite. I'm getting negative thoughts that some future employers (or even clients) will look down on me for not using MVC. I probably get the skinnable aspect of MVC, but when I try to apply it to my own project, I don't know where to start. Also, some programmers have diverging views on how to accomplish MVC properly. Take this, for instance, from Jeff's post about MVC: "The view is simply how you lay the data out, how it is displayed. If you want a subset of some data, for example, my opinion is that is a responsibility of the model." So some programmers use MVC, but somehow inadvertently use the view or the controller to extract a subset of data. Why can't we have a definitive definition of what MVC is and how to accomplish it properly? Also, when I search for MVC .NET programs, most results apply to web programs, not desktop apps, which intrigues me further. My guess is that MVC is most advantageous for web apps; the intermixing of view (HTML) and controller (program code) is not much of a problem in desktop apps.
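    To make the question concrete, here is a minimal hedged sketch of the separation MVC asks for, with hypothetical names and no particular framework assumed; note that the "subset of some data" query lives in the model, per the quote above:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Model: owns the data and the rules, including "give me a subset".
        public class ProductModel
        {
            private readonly List<string> products =
                new List<string> { "apple", "book", "cup" };

            public List<string> StartingWith(string prefix)
            {
                return products.Where(p => p.StartsWith(prefix)).ToList();
            }
        }

        // View: only knows how to display whatever it is handed.
        public class ProductView
        {
            public void Render(List<string> items)
            {
                foreach (var item in items) Console.WriteLine("* " + item);
            }
        }

        // Controller: takes the request, asks the model, picks the view.
        public class ProductController
        {
            public void Show(string prefix)
            {
                var model = new ProductModel();
                new ProductView().Render(model.StartingWith(prefix));
            }
        }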

    Read the article

  • regular expression help

    - by hao
    <li class="zk_list_c2 f_l"><a title="abc" target="_blank" href="link"> abc </a>&nbsp;</li>
    How would I extract abc and link?

        $pattern = "/<li class=\"zk_list_c2 f_l\"><a title=\"(.*)\" target=\"_blank\" href=\"(.*)\">\s*(.*)\s*<\/a>&nbsp;<\/li>/m";
        preg_match_all($pattern, $content, $matches);

    The one I have right now doesn't seem to work.
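    A hedged sketch of one way this could be made to work: the greedy (.*) groups swallow everything up to the last matching quote or tag on the line, so switching to non-greedy (.*?) is the usual fix; for arbitrary HTML a real parser such as DOMDocument is more robust. The $content value here is just the sample line from the question:

        <?php
        $content = '<li class="zk_list_c2 f_l"><a title="abc" target="_blank" href="link"> abc </a>&nbsp;</li>';

        // Non-greedy (.*?) stops at the first closing quote/tag instead of
        // swallowing everything up to the last one on the line.
        $pattern = '/<li class="zk_list_c2 f_l"><a title="(.*?)" target="_blank" href="(.*?)">\s*(.*?)\s*<\/a>&nbsp;<\/li>/';

        preg_match_all($pattern, $content, $matches);
        print_r($matches[2]); // hrefs, e.g. "link"
        print_r($matches[3]); // anchor text, e.g. "abc"
        ?>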

    Read the article

  • manipulating the geocoding webservices results through javascript?

    - by hao
    Using http://maps.google.com/maps/api/geocode/json?address=xyz we get a JSON file. I am wondering how I can manipulate the results in JavaScript. How do I create the results object in JavaScript? http://code.google.com/apis/maps/documentation/geocoding/index.html#JSONParsing doesn't really explain how they get the myJSONResult.
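    A minimal hedged sketch of the parsing step, assuming the JSON text has already been fetched (the geocoding endpoint does not allow plain cross-domain XMLHttpRequest, so the fetch itself usually goes through your own server or the Maps JavaScript API). The responseText below is an abbreviated, hypothetical response body:

        // Hypothetical (trimmed) body from .../geocode/json?address=xyz
        var responseText =
            '{"status":"OK","results":[{"geometry":{"location":{"lat":40.714,"lng":-74.006}}}]}';

        // JSON.parse is what turns raw text into the "myJSONResult" object
        // the documentation refers to; after that it is ordinary JavaScript.
        var myJSONResult = JSON.parse(responseText);

        if (myJSONResult.status === "OK") {
            var loc = myJSONResult.results[0].geometry.location;
            alert("lat: " + loc.lat + ", lng: " + loc.lng);
        }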

    Read the article

  • Eidetic memory: what magic numbers do you still remember?

    - by Hao
    Long before you practiced writing readable code, which "magic numbers" did you commit to memory, and which do you still remember to this day? Here's some of my list:
    72 80 75 77 13 32 27 - up, down, left, right, enter, space, escape
    1 2 4 128 - blue, green, red, blink
    67h 33h 17h - interrupts for EMS, mouse, printer
    function AH 9, interrupt 21h
    alt+219 for a block ASCII character
    alt+164 for ñ
    90 - NOP
    13 10 - carriage return, line feed
    ASCII 1 and 2 are faces, ASCII 3 is a heart (no, not this heart: <3 :-))
    debug -o72,10 -o71,12 clears the BIOS password; I don't know what those numbers mean, it's like a trade secret that got shared around during college days
    ASCII 7 sounds a beep
    P.S. Somehow, remembering some of these magic numbers can help you with certain tech problems: your keyboard is broken, or an office pal's keyboard doesn't have accented characters. An anecdote: during college, one of my friends asked me how to remove the newlines in his Word document. Not having used Word much then, I somehow "intuitively" guessed to find ^013 and replace it with nothing. Well, it worked :-)

    Read the article

  • How to check if JavaScript object is JSON

    - by Wei Hao
    I have a nested JSON object that I need to loop through, and the value of each key could be a String, a JSON array or another JSON object. Depending on the type, I need to carry out different operations. Is there any way I can check the type of a value to see if it is a String, JSON object or JSON array? I tried using typeof and instanceof, but neither seemed to work: typeof returns "object" for both JSON objects and arrays, and instanceof gives an error when I do obj instanceof JSON. To be more specific, after parsing the JSON into a JS object, is there any way I can check whether a value is a plain string, an object with keys and values (from a JSON object), or an array (from a JSON array)? For example:
    JSON:

        var data = '{"hi": {"hello": ["hi1","hi2"]}, "hey": "words"}';

    JavaScript:

        var jsonObj = JSON.parse(data);
        var level1 = jsonObj.hi;
        var text = jsonObj.hey;
        var arr = level1.hello;
        // how to check if level1 was formerly a JSON object?
        // how to check if arr was formerly a JSON array?
        // how to check if text is a string?
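    A minimal hedged sketch of the usual checks after JSON.parse; the Object.prototype.toString trick doubles as an Array.isArray fallback on older browsers:

        function kindOf(value) {
            if (value === null) return "null";
            if (typeof value === "string") return "string";    // JSON string
            if (Object.prototype.toString.call(value) === "[object Array]")
                return "array";                                // JSON array
            if (typeof value === "object") return "object";    // JSON object
            return typeof value;                               // number, boolean
        }

        var jsonObj = JSON.parse('{"hi": {"hello": ["hi1","hi2"]}, "hey": "words"}');
        alert(kindOf(jsonObj.hi));        // "object"
        alert(kindOf(jsonObj.hi.hello));  // "array"
        alert(kindOf(jsonObj.hey));       // "string"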

    Read the article

  • ASP.NET MVC web hosting that has PayPal as a payment option?

    - by Hao
    I already checked some of the ASP.NET MVC hosting sites listed here: http://stackoverflow.com/questions/637567/affordable-stable-asp-net-mvc-hosting-exist. I worry about entering a credit card number, and all of them require one. Do you know an ASP.NET MVC web host that offers PayPal as a payment option?

    Read the article

  • Retrieving coordinates on this page

    - by hao
    Hey guys, I'm trying to do some data mining and analyze data based on locations. For this site, http://www.dianping.com/shop/1898365, I am trying to figure out the latitude and longitude by crawling, but I can't seem to figure out where this information is stored. Can someone give me some pointers?

    Read the article

  • Controller should not have domain logic. How faithful should one adhere to this tenet?

    - by Hao
    Quoting from page 49 of the Pro ASP.NET MVC book:
    "It is certainly possible to put domain logic into a controller, even though you shouldn't, just because it seems like it will work anyway. It's easy to avoid this if you imagine that you have multiple UI technologies (e.g., an ASP.NET MVC application plus a native iPhone application) operating on the same underlying business domain layer (and maybe one day you will!). With this in mind, it's clear that you don't want to put domain logic into any of the UI layers."
    Why does he seem to contradict himself on page 172?

        [HttpPost]
        public ActionResult CheckOut(Cart cart, ShippingDetails shippingDetails)
        {
            // Empty carts can't be checked out
            if (cart.Lines.Count == 0)
                ModelState.AddModelError("Cart", "Sorry, your cart is empty!");

            if (ModelState.IsValid)
            {
                orderSubmitter.SubmitOrder(cart, shippingDetails);
                cart.Clear();
                return View("Completed");
            }
            else
            {
                // Something was invalid
                return View(shippingDetails);
            }
        }

    Related to: How to avoid placing domain logic in controller?
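    One common reading is that the empty-cart check is input validation rather than domain logic, so the two pages need not contradict each other. Still, here is a hedged sketch of pushing the rule down into the domain layer (hypothetical names, not the book's code):

        // Domain layer: the business rule lives with Cart.
        public class Cart
        {
            public List<CartLine> Lines = new List<CartLine>();

            public bool CanCheckOut()
            {
                return Lines.Count > 0; // domain rule: no empty checkouts
            }
        }

        // The controller stays thin: it asks the domain, then maps the
        // answer onto ModelState for the UI.
        [HttpPost]
        public ActionResult CheckOut(Cart cart, ShippingDetails shippingDetails)
        {
            if (!cart.CanCheckOut())
                ModelState.AddModelError("Cart", "Sorry, your cart is empty!");

            if (!ModelState.IsValid)
                return View(shippingDetails);

            orderSubmitter.SubmitOrder(cart, shippingDetails);
            cart.Clear();
            return View("Completed");
        }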

    Read the article

  • Get Chinese Romanization from Google Translate API

    - by krubo
    The Google language translate API works cleanly to translate into Chinese:

        <script type="text/javascript" src="http://www.google.com/jsapi"></script>
        <script>
        google.load('language', '1');
        function googletrans(text) {
            google.language.translate(text, 'en', 'zh', function(result) {
                alert(result.translation);
            });
        }
        </script>
        <input onchange="googletrans(this.value);">

    Example input: "Hello". Result: "你好". My problem is that I can't get the Romanization (the pronunciation in English letters). This is a known issue. Now, the data is right there on translate.google.com (example input: "Hello"; result: "Ni hao"), and I can even see it by pointing my browser to: http://translate.google.com/translate_a/t?client=t&text=hello&hl=en&sl=en&tl=zh-CN&otf=2&pc=0 Result: {"sentences":[{"trans":"你好","orig":"hello","translit":"Ni hao"}], "dict":[{"pos":"interjection","terms":["?"]}],"src":"en"} But somehow, when I try to fetch this URL with ajax, it fails (XMLHttpRequest Exception 101). Is there any way to retrieve this Romanization data with ajax?
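    The XMLHttpRequest Exception 101 is the same-origin policy at work: translate.google.com will not answer XHR from another domain. A hedged sketch of the usual workaround, relaying through your own server; /romanize is a hypothetical endpoint on your site that fetches the translate_a/t URL server-side and echoes the JSON back:

        // Same-origin request to our own proxy, which relays Google's JSON.
        function romanize(text, callback) {
            var xhr = new XMLHttpRequest();
            xhr.open('GET', '/romanize?text=' + encodeURIComponent(text), true);
            xhr.onreadystatechange = function () {
                if (xhr.readyState === 4 && xhr.status === 200) {
                    var data = JSON.parse(xhr.responseText);
                    callback(data.sentences[0].translit); // e.g. "Ni hao"
                }
            };
            xhr.send();
        }

        romanize('hello', function (translit) { alert(translit); });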

    Read the article

  • Can Near Field Communications (NFC) Benefit your Supply Chain?

    - by Stephen Slade
    Leading firms continue to leverage the latest tools and technologies to drive performance, especially around minimizing transaction costs. With razor-thin margins in manufacturing and distribution, leading producers are resorting to Near Field Communications to gain efficiencies. In this week's CIO magazine (Apr 1, 2012, p. 30, see http://www.cio.com), Lauren Brousell talks about the things you need to know to make a more informed decision on NFC. Sandy Shen of Gartner says NFC appeals because "it supports any service that requires data transfer and authentication."
    1. NFC is Cheap and Easy - a short-range transmission technology connecting smartphones for data transfer.
    2. Adoption Seems Inevitable - more merchants will use NFC for payments in the future; wallets are becoming obsolete.
    3. It's a Hot Potato for the Enterprise - credit card companies and cell phone providers are debating who handles the billing process.
    4. It's in Use Overseas - Japan uses FeliCa to pay by smartphone; in the US, billing agreements are causing territorial conflict.
    5. Security Risks Come Standard - as people lose handheld devices, security will be an ongoing concern; credentials and timeout features can alleviate this to some extent.
    My prediction: in 5 years, we won't have wallets in our pockets. Our secure and all-powerful smartphones will be our electronic portable banks, executing transactions for us, based on our preferences and propensities, across the whole supply chain.

    Read the article

  • Remove all problematic characters in an intelligent way in C#

    - by J. Pablo Fernández
    Is there any .NET library to remove all problematic characters from a string and leave only alphanumerics, hyphens and underscores (or a similar subset) in an intelligent way? This is for use in URLs, file names, etc. I'm looking for something similar to stringex, which can do the following:
    A simple prelude:
    "simple English".to_url = "simple-english"
    "it's nothing at all".to_url = "its-nothing-at-all"
    "rock & roll".to_url = "rock-and-roll"
    Let's show off:
    "$12 worth of Ruby power".to_url = "12-dollars-worth-of-ruby-power"
    "10% off if you act now".to_url = "10-percent-off-if-you-act-now"
    You don't even wanna trust Iconv for this next part:
    "kick it en Français".to_url = "kick-it-en-francais"
    "rock it Español style".to_url = "rock-it-espanol-style"
    "tell your readers 你好".to_url = "tell-your-readers-ni-hao"
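    I'm not aware of a drop-in .NET port of stringex; here is a hedged sketch of the usual hand-rolled core (Unicode decomposition to strip diacritics, then a whitelist regex). The smarter substitutions like "&" to "and" or "$12" to "12-dollars" would still need a lookup table on top:

        using System.Globalization;
        using System.Text;
        using System.Text.RegularExpressions;

        static class Slug
        {
            public static string ToUrl(string input)
            {
                // Decompose accented characters and drop the combining
                // marks: "Français" -> "Francais".
                string formD = input.Normalize(NormalizationForm.FormD);
                var sb = new StringBuilder();
                foreach (char c in formD)
                    if (CharUnicodeInfo.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark)
                        sb.Append(c);

                string s = sb.ToString().Normalize(NormalizationForm.FormC).ToLowerInvariant();
                s = Regex.Replace(s, @"[^a-z0-9]+", "-"); // whitelist the rest
                return s.Trim('-');
            }
        }

        // Slug.ToUrl("rock it Español style") == "rock-it-espanol-style"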

    Read the article

  • How to Compile Mod_Python 3.3.1 for Python 2.6 and Apache 2.2 on Windows?

    - by John
    I have no experience compiling code other than using Visual Studio's Build command. I am hoping we can create a step-by-step guide for compiling mod_python on Windows. Please be as descriptive as possible. This is what I've done so far: download and install Python 2.6.2; download and install Apache 2.2.11; download the most recent source code for mod_python from svn. From here I'm lost as to what the next step is. I've downloaded Microsoft Visual C++ 2008 Express Edition. As mentioned by Hao, I've already tried the tutorial mentioned in that link. Here are the error messages I'm receiving with it:

        C:\mod_python\dist\build_installer.bat
        Could Not Find C:\mod_python\src\*.obj
        running bdist_wininst
        running build
        running build_py
        creating build
        creating build\lib.win32-2.6
        creating build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\apache.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\cache.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\cgihandler.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\Cookie.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\importer.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\psp.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\publisher.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\python22.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\Session.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\testhandler.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\util.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\__init__.py -> build\lib.win32-2.6\mod_python
        running build_ext
        building 'mod_python_so' extension
        creating build\temp.win32-2.6
        creating build\temp.win32-2.6\Release
        creating build\temp.win32-2.6\Release\mod_python
        creating build\temp.win32-2.6\Release\mod_python\src
        C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -DWIN32 -DNDEBUG -D_WINDOWS -IC:\mod_python\src\include -Ic:\apache\include -IC:\Python26\include -IC:\Python26\PC /TcC:\mod_python\src\mod_python.c /Fobuild\temp.win32-2.6\Release\mod_python\src\mod_python.obj
        mod_python.c
        c:\apache\include\ap_config.h(25) : fatal error C1083: Cannot open include file: 'apr.h': No such file or directory
        error: command '"C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe"' failed with exit status 2
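    The actual failure is narrower than the log suggests: cl.exe is only told to look in c:\apache\include, but apr.h ships with APR, not with Apache's own headers. A hedged sketch of the kind of fix involved; the two extra -I paths below are assumptions about where APR's headers live (an Apache source tree keeps them under srclib), not known-good paths:

        REM Re-run the failing compile with APR's include directories added.
        REM Adjust the extra -I paths to wherever apr.h actually is.
        cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -DWIN32 -DNDEBUG -D_WINDOWS ^
            -IC:\mod_python\src\include -Ic:\apache\include ^
            -IC:\apache\srclib\apr\include -IC:\apache\srclib\apr-util\include ^
            -IC:\Python26\include -IC:\Python26\PC ^
            /TcC:\mod_python\src\mod_python.c ^
            /Fobuild\temp.win32-2.6\Release\mod_python\src\mod_python.obj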

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events.

    CLSF - 17th October

    XFS update (led by Jeff Liu)

    XFS keeps up rapid progress with a lot of changes, especially focused on infrastructure/performance improvements as well as new feature development. This is reflected in a sample statistic comparing XFS, Ext4+JBD2 and Btrfs via:

        # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs
        XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-)
        Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-)
        Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-)

    What made up those changes in XFS?
    - Self-describing metadata (CRC32c). This new feature contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock.
    - Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime, reducing CPU overhead.
    - User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10. Thanks to Dwight Engen for his efforts on this.
    - Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e., with CRC enabled.
    - CONFIG_XFS_WARN, a new lightweight runtime debugger which can be deployed in production environments.
    - Readahead of log objects during recovery; this change speeds up log replay significantly.
    - Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes carrying post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2).

    Bitter arguments ensued in this session, especially over the comparison between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays, because:
    - Stability: XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors in XFS than in other filesystems over the past year or more; I owe that to the XFS upstream code reviewers, who always perform serious code review as well as testing.
    - Good performance for both large and small files; "XFS does not work very well for small files" has been an old story for years now.
    - Best choice (maybe) for distributed PB-scale filesystems; e.g., Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size.
    - Best choice for large storage (>16TB); Ext4 does not support a single file larger than around 15.95TB.
    - Scalability: any objection to XFS being best on this point? :)
    - XFS handles transaction concurrency better than Ext4. Why? The maximum log size in XFS is 2038MB, compared to 128MB in Ext4.
    - Misc: Ext4 is widely used and has proven fast and stable in various loads and scenarios; XFS just needs more customers, and Btrfs is still on the road to maturity.
    Ceph Introduction (led by Li Wang)

    This was a hot topic. Li gave us a nice introduction to the design as well as their current work. Actually, the Ceph client has been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems it has not yet been widely deployed in production environments. Their major work focuses on inline data support: separating metadata and data storage and reducing file access time. A file access currently needs two communications, fetching the metadata from the MDS and then getting the data from the OSD, and small-file access is further limited by network latency. The solution is, for small files, to store the data with the metadata, so that when accessing a small file the metadata server can push both metadata and data to the client at the same time. In this way, they reduce the overhead of calculating the data offset and save the communication with the OSD. For this feature they have only run some small-scale testing, but saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. The sequential read performance for 1K-size files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in production for large-scale PB-level storage, but neither could give a positive answer; it looks like Ceph is not even deployed across DreamHost (subject to confirmation). From Li, they have only deployed Ceph for small-scale storage (32 nodes), although they'd like to try 6000 nodes in the future.

    Improve Linux swap for flash storage (led by Shaohua Li)

    Because of its high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is to use SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges to using SSD efficiently for swap.

    SWAPOUT
    - swap_map scan: swap_map is the in-memory data structure that tracks swap disk usage, but it is walked with a slow linear scan, which becomes a bottleneck when finding many adjacent pages on SSD. Shaohua Li changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSD.
    - IO pattern: in most cases, swap IO arrives in an interleaved pattern, because of multiple reclaimers or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved IO to some extent, we cannot count on it completely. Hence a per-CPU cluster was added on top of the previous change; it helps each reclaimer do sequential IO, which the block layer can merge more easily.
    - TLB flush: if we're reclaiming an active page, we must first move it from the active LRU list to the inactive LRU list, and then reclaim it from the inactive LRU to swap it out. During the process, we need to clear the PTE twice: first the 'A' (ACCESSED) bit, then the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only part of the work has been pushed to mainline.

    SWAPIN: a page fault does iodepth=1 synchronous IO, but it's a bit wasteful to issue only a page-sized IO each time. The obvious solution is swap readahead.
    But the current in-kernel swap readahead is arbitrary (always 8 pages) and often performs poorly for both random and sequential access workloads. Shaohua introduced a new madvise(MADV_WILLNEED) flag to do swap prefetch, so the change happens in the userspace API and leaves the in-kernel readahead unchanged (though I think some improvement could also be made there).

    SWAP discard: as we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to an async discard, which allows discard and write to run at the same time; meanwhile, the unit of discard is also optimized to the cluster.

    Misc: lock contention. With many concurrent swapouts and swapins, contention on locks such as anon_vma or swap_lock is high, so he changed the swap_lock to a per-swap-device lock. But some lock contention remains on very fast SSDs because of the swapcache address_space lock.

    Zproject (led by Bob Liu)

    Bob gave us a very nice introduction to the current state of memory compression. There are now three projects (zswap/zram/zcache), all aiming to smooth swap IO storms and improve performance, but each has its own pros and cons.
    - ZSWAP: implemented on the frontswap API, it uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied": it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages, which are then written to the real swap device, but this can fail if the two pages cannot be allocated.
    - ZRAM: acts as a compressed ramdisk used as a swap device. It uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since more pages are needed to uncompress and free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Both ZRAM and ZSWAP currently live in the driver/staging tree, and in the mm community there have been discussions of merging ZRAM into ZSWAP or vice versa, but no agreement yet.
    - ZCACHE: handled file page compression, but it was removed from staging recently.

    From industry (led by Tang Jie, LSI)

    An LSI engineer introduced several new products to us. The first was RAID 5/6 cards that use full-stripe writes to improve performance. The second was the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes; it's called DuraWrite, and a typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression that is transparent to the upper layers. LSI testing shows that with this virtual capacity enabled, a 1TB drive can support up to 2TB of capacity, but the application must monitor free flash space to maintain optimal performance and guard against free flash space exhaustion. He said the most useful application is databases. Another thing I think is worth mentioning is an NV-DRAM memory in NMR/Raptor which is directly exposed to the host system: applications can directly access the NV-DRAM via a memory address, using the standard system call mmap(). He said that it is very useful for database logging now.
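    For readers unfamiliar with that access model, a minimal hedged sketch of what "directly access via a memory address" looks like from an application; /dev/nvdram is a hypothetical device node, since the talk did not name the driver interface:

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            /* Hypothetical device node; the real name depends on the driver. */
            int fd = open("/dev/nvdram", O_RDWR);
            if (fd < 0) { perror("open"); return 1; }

            /* Map the NV-DRAM into the address space: from here on, log
               records are written with ordinary stores, no write() syscalls. */
            size_t len = 4096;
            char *log = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (log == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

            memcpy(log, "commit record", 14);  /* persists in NV-DRAM */

            munmap(log, len);
            close(fd);
            return 0;
        }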
    NVM products of this kind have begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will affect the current OS layers, especially the file system; e.g., journaling may need a redesign to fully utilize this nonvolatile memory.

    OCFS2 (led by Canquan Shen)

    Without a doubt, Huawei has been the biggest contributor to OCFS2 in the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. Their major ongoing work is to support ATS (atomic test and set), which can work with DLM at the same time. This idea looks inspired by VMware's VMFS locking, i.e., http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

    CLK - 18th October 2013

    Improving Linux Development with Better Tools (Andi Kleen)

    This talk focused on how to find and solve bugs as Linux's complexity keeps growing. Generally, we can do this with the following kinds of tools:
    - Static code checkers, e.g., sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. These can check a lot of things, from simple mistakes to complex problems, but the challenges are: some are very slow; false positives; it may take a concentrated effort to get the false positive rate down. In particular, no static checker he has found can follow indirect calls ("OO in C", common in the kernel):

        struct foo_ops {
            int (*do_foo)(struct foo *obj);
        };
        foo->do_foo(foo);

    - Dynamic runtime checkers, e.g., thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
    - Fuzzers/test suites. E.g., Trinity is a great tool; it finds many bugs, but needs a manual model for each syscall. Modern fuzzers use automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf
    - Debuggers/tracers to understand code, e.g., ftrace: it can dump on events/oopses/custom triggers, but in many cases there is still too much overhead to keep it running throughout a debug session.
    - Tools to read and understand source, e.g., grep/cscope, work great for many cases, but do not understand indirect pointers (the OO-in-C model used in the kernel). Give us all "do_foo" instances:

        struct foo_ops {
            int (*do_foo)(struct foo *obj);
        } = { .do_foo = my_foo };
        foo->do_foo(foo);

    It would be great to have a cscope-like tool that understands this based on types/initializers.

    XFS: The High Performance Enterprise File System (Jeff Liu) [slides]

    I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts reflecting the performance of XFS/Btrfs/Ext4 for small files. About a dozen attendees raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and one other attendee. The audience questions mainly focused on stability and comparison with other file systems.

    Linux Containers (Feng Gao)

    The speaker introduced the purpose of the various kinds of namespaces, including mount/UTS/IPC/Network/PID/User, as well as the system API/ABI. For the userspace tools, he mainly focused on libvirt LXC rather than LXC itself.
    Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver. It can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two possible new namespaces for the future: the first is an audit namespace, though it is not yet clear whether it should be tied to the user namespace; the other is a syslog namespace, but the question is, do we really need it?

    In-memory Compression (Bob Liu)

    The same as at CLSF: a nice introduction that I have already covered above.

    Misc

    There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all those things.

    -- Jeff Liu

    Read the article
