Search Results

Search found 14861 results on 595 pages for 'high speed computing'.


  • Monday at Oracle OpenWorld 2012 - Must See Session: “Using the Right Tools, Techniques, and Technologies for Integration Projects”

    - by Lionel Dubreuil
    Don’t miss the “CON8669 - Using the Right Tools, Techniques, and Technologies for Integration Projects” session with Timothy Hall - Sr. Director, Oracle. Date: Monday, Oct 1, Time: 3:15 PM - 4:15 PM, Location: Moscone South - 308. Every integration project brings its own unique set of challenges. There are many tools and techniques to choose from. How do you ensure that you have a means of consistently and repeatedly making decisions about which tools, techniques, and technologies are used? In working with many customers around the globe, Oracle has developed a set of criteria to help evaluate a variety of common integration questions. This session explores these criteria and how they have been further organized into decision trees that offer a repeatable means for ensuring that project teams are given the same guidance from project to project. Using these techniques, the presentation shows how you can reduce risk and speed productivity for your projects. Objectives for this session are to: Discuss common questions that arise at the start of integration projects; Review various decision criteria and approaches for getting to a consistent set of answers; Explore how these techniques can be used to reduce risk and speed productivity.

    Read the article

  • colliding btRigidBody objects behave strangely when moving slowly

    - by Piku
    I'm trying to use Bullet Physics in my iOS game. The engine appears to be correctly compiled in that the demos work fine. In my game I have the player's ship and some enemy ships. They're defined as btRigidBody objects and btCollisionObjects and I'm using btSphereShapes for collision. At 'fast' speeds, collisions appear to happen sensibly - things collide and nothing goes 'weird'. If the speeds are very slow though and the player's ship touches a non-moving object the collision happens, but then the player's ship moves at incredible speed over the next few frames and appears a long distance from where it collided - completely out of proportion to the speed it was moving before impact. To move the things around I'm using setLinearVelocity() each frame, ticking the physics engine, then using getMotionState() to update the rendering code I have. Part of the issue might be I don't quite understand how to set the correct mass or what the best speeds are to use for anything. I'm mostly sticking numbers in and seeing what happens. Should I be using Bullet in this way, and are there any guidelines for deciding on the mass of objects? (am I right in assuming that in collisions heavier objects will force lighter objects to move more)
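
    For reference, here is a minimal sketch (not the poster's code) of how a dynamic sphere body is usually set up in Bullet, with the inertia derived from the mass via calculateLocalInertia and movement driven by forces plus a fixed time step rather than by overwriting the velocity every frame; the mass, radius and force values are arbitrary placeholders:

        #include <btBulletDynamicsCommon.h>

        // Sketch: create a dynamic sphere body whose inertia matches its mass.
        // Ownership/cleanup is omitted for brevity; values are placeholders, not tuned numbers.
        btRigidBody* createShip(btDiscreteDynamicsWorld* world)
        {
            const btScalar radius = 1.0f;
            const btScalar mass = 1.0f;   // heavier bodies do push lighter ones around more in collisions

            btSphereShape* shape = new btSphereShape(radius);
            btVector3 inertia(0, 0, 0);
            shape->calculateLocalInertia(mass, inertia);   // let Bullet derive inertia from the mass

            btDefaultMotionState* motion = new btDefaultMotionState(
                btTransform(btQuaternion(0, 0, 0, 1), btVector3(0, 0, 0)));
            btRigidBody::btRigidBodyConstructionInfo info(mass, motion, shape, inertia);
            btRigidBody* body = new btRigidBody(info);
            world->addRigidBody(body);
            return body;
        }

        // Per frame: apply forces/impulses and step with a fixed time step; writing the
        // velocity directly every frame can fight the solver's own collision response.
        void tick(btDiscreteDynamicsWorld* world, btRigidBody* ship, btScalar dt)
        {
            ship->activate(true);                          // keep the body from going to sleep
            ship->applyCentralForce(btVector3(10, 0, 0));  // placeholder thrust
            world->stepSimulation(dt, 10);                 // internal fixed substeps
        }

    Mass mostly matters relative to the other bodies it hits, so picking round numbers (1 for the ship, 10 for something heavy) and keeping the simulation scale around 1 unit per metre is usually a reasonable starting point.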

    Read the article

  • Scaling sprite velocity / co-ordinates in Android

    - by user22241
    I'm trying to find the answer to a question that I've had for a long time, but am having trouble finding it! I hope someone can help :-) I'm trying to find information on how to scale sprite velocity / movement / co-ordinates. What I mean by this is how do I get a sprite to move at the same speed relative to the screen size / DPI so that it takes the same amount of real-time to get from one side of the screen to the other? All of the posts pertaining to sprite scaling that I can find on the various forums relate to the size of the sprite, but this part of it I'm OK with so far, it's just that when I move a sprite, it kind of gets there at a different speed depending on the dpi / resolution of the device. I hope I'm making sense. This is the code I have so far, instead of using explicit amounts, like 1, I'm using something like the following: platSpeedFloat= (1 * (dpi/160)); //Use '1' so on an MDPI screen, the sprite will move by 1 physical pixel Then basically what I'm doing is something like this: (all variables previously declared) platSpeedSave+=platSpeedFloat; //Add the platSpeedFloat value to the current platSpeedSave value platSpeed=(int) platSpeedSave; //Cast to int so it can be checked in the following statement if (platSpeed==platSpeedSave) //Check the casted int value to float value stored previously {floorY=floorY-platSpeed; //If they match then change the Y value platSpeedSave=0;} //Reset Would be grateful if someone could assist - hope I'm making sense. The above doesn't seem to work - the sprite moves 'faster' on lower DPI screens. Thanks
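
    For what it's worth, the usual way to make the traversal time independent of resolution and frame rate is to define speed as a fraction of the screen per second and multiply by the elapsed time each frame, instead of accumulating a per-pixel step. A minimal sketch of the arithmetic follows (written in C++ only for brevity; the same two lines drop into a Java game loop, and the 3-second crossing time is just an example figure):

        #include <cstdio>

        // Sketch: speed is defined as "cross the screen in secondsToCross seconds",
        // so the motion takes the same real time on any resolution or frame rate.
        float advance(float positionPx, float screenWidthPx, float secondsToCross, float deltaSeconds)
        {
            const float pixelsPerSecond = screenWidthPx / secondsToCross;
            return positionPx + pixelsPerSecond * deltaSeconds;
        }

        int main()
        {
            float pos = 0.0f;
            // 480 px wide screen, ~60 fps frames, sprite crosses the screen in 3 seconds
            for (int frame = 0; frame < 180; ++frame)
                pos = advance(pos, 480.0f, 3.0f, 1.0f / 60.0f);
            std::printf("position after 3 simulated seconds: %.1f px\n", pos);  // ~480
            return 0;
        }

    Because the step is recomputed from the screen width and the real elapsed time every frame, no DPI factor or int/float comparison is needed at all.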

    Read the article

  • Memcached Lagging

    - by Brad Dwyer
    Let me preface this by saying that this is a followup question to this topic. That was "solved" by switching from Solaris (SmartOS) to Ubuntu for the memcached server. Now we've multiplied load by about 5x and are running into problems again. We are running a site that is doing about 1000 requests/minute, each request hits Memcached with approximately 3 reads and 1 write. So load is approximately 65 requests per second. Total data in the cache is about 37M, and each key contains a very small amount of data (a JSON-encoded array of integers amounting to less than 1K). We have setup a benchmarking script on these pages and fed the data into StatsD for logging. The problem is that there are spikes where Memcached takes a very long time to respond. These do not appear to correlate with spikes in traffic. What could be causing these spikes? Why would memcached take over a second to reply? We just booted up a second server to put in the pool and it didn't make any noticeable difference in the frequency or severity of the spikes. This is the output of getStats() on the servers: Array ( [-----------] => Array ( [pid] => 1364 [uptime] => 3715684 [threads] => 4 [time] => 1336596719 [pointer_size] => 64 [rusage_user_seconds] => 7924 [rusage_user_microseconds] => 170000 [rusage_system_seconds] => 187214 [rusage_system_microseconds] => 190000 [curr_items] => 12578 [total_items] => 53516300 [limit_maxbytes] => 943718400 [curr_connections] => 14 [total_connections] => 72550117 [connection_structures] => 165 [bytes] => 2616068 [cmd_get] => 450388258 [cmd_set] => 53493365 [get_hits] => 450388258 [get_misses] => 2244297 [evictions] => 0 [bytes_read] => 2138744916 [bytes_written] => 745275216 [version] => 1.4.2 ) [-----------:11211] => Array ( [pid] => 8099 [uptime] => 4687 [threads] => 4 [time] => 1336596719 [pointer_size] => 64 [rusage_user_seconds] => 7 [rusage_user_microseconds] => 170000 [rusage_system_seconds] => 290 [rusage_system_microseconds] => 990000 [curr_items] => 2384 [total_items] => 225964 [limit_maxbytes] => 943718400 [curr_connections] => 7 [total_connections] => 588097 [connection_structures] => 91 [bytes] => 562641 [cmd_get] => 1012562 [cmd_set] => 225778 [get_hits] => 1012562 [get_misses] => 125161 [evictions] => 0 [bytes_read] => 91270698 [bytes_written] => 350071516 [version] => 1.4.2 ) ) Edit: Here is the result of a set and retrieve of 10,000 values. Normal: Stored 10000 values in 5.6118 seconds. Average: 0.0006 High: 0.1958 Low: 0.0003 Fetched 10000 values in 5.1215 seconds. Average: 0.0005 High: 0.0141 Low: 0.0003 When Spiking: Stored 10000 values in 16.5074 seconds. Average: 0.0017 High: 0.9288 Low: 0.0003 Fetched 10000 values in 19.8771 seconds. Average: 0.0020 High: 0.9478 Low: 0.0003
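
    For reference, the benchmark numbers above can come from a loop as small as the following sketch; it uses the libmemcached C API rather than whatever client the site actually runs, and the host, port, key prefix and payload are placeholders:

        #include <libmemcached/memcached.h>
        #include <chrono>
        #include <cstdint>
        #include <cstdio>
        #include <cstdlib>
        #include <string>

        // Sketch: time N set+get round trips against one memcached server and report
        // the worst case, similar in spirit to the "Stored/Fetched 10000 values" output above.
        int main()
        {
            memcached_st* memc = memcached_create(nullptr);
            memcached_server_add(memc, "127.0.0.1", 11211);    // placeholder host/port

            const int n = 10000;
            double worst = 0.0;
            for (int i = 0; i < n; ++i) {
                std::string key = "bench:" + std::to_string(i);
                std::string value = "[1,2,3,4]";               // small JSON-ish payload, like the real keys

                auto t0 = std::chrono::steady_clock::now();
                memcached_set(memc, key.c_str(), key.size(), value.c_str(), value.size(), 0, 0);

                size_t len = 0;
                uint32_t flags = 0;
                memcached_return_t rc;
                char* got = memcached_get(memc, key.c_str(), key.size(), &len, &flags, &rc);
                auto t1 = std::chrono::steady_clock::now();
                if (got) free(got);

                double secs = std::chrono::duration<double>(t1 - t0).count();
                if (secs > worst) worst = secs;
            }
            std::printf("worst set+get round trip: %.4f s\n", worst);
            memcached_free(memc);
            return 0;
        }

    Running such a loop from the web servers and from the memcached host itself, both during and outside a spike, helps separate network-level stalls from stalls inside memcached.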

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights for both events. CLSF - 17th October. XFS update (led by Jeff Liu). XFS keeps making rapid progress with a lot of changes, especially focused on infrastructure/performance improvements as well as new feature development. This is reflected in a sample statistic comparing XFS, Ext4+JBD2 and Btrfs via: # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs (and likewise for fs/ext4 fs/jbd2 and fs/btrfs) XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-) Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-) Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-) What made up those changes in XFS? Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock. Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime to reduce the CPU overhead. User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning from Linux 3.10; thanks to Dwight Engen's efforts for this. Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e. CRC enabled. CONFIG_XFS_WARN, a new lightweight runtime debugger which can be deployed in production environments. Readahead of log objects during recovery, a change that can speed up log replay significantly. Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes with post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2). Bitter arguments ensued from this session, especially around the comparison between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays because: Stable - XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than with other filesystems over the past year or more; I owe that to the XFS upstream code reviewers, who always perform serious code review as well as testing. Good performance for large and small files - the claim that XFS does not work very well for small files has been an old story for years. Best choice (maybe) for distributed PB-scale filesystems - e.g. Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size. Best choice for large storage (>16TB) - Ext4 does not support a single file larger than around 15.95TB. Scalability - any objection to XFS being best on this point? :) XFS deals with transaction concurrency better than Ext4 - why? The maximum size of the log in XFS is 2038MB compared to 128MB in Ext4. Misc - Ext4 is widely used and has proved fast and stable in various loads and scenarios, XFS just needs more customers, and Btrfs is still on the road to maturity. 
    Ceph Introduction (led by Li Wang). This is a hot topic. Li gave us a nice introduction to the design as well as their current work. Actually, the Ceph client has been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems that it has not yet been widely deployed in production environments. Their major work is focused on inline data support to separate the metadata and data storage and reduce file access time; i.e. a file access needs two round trips, fetching the metadata from the MDS and then getting the data from an OSD, and small file access is also limited by the network latency. The solution is, for small files, to store the data with the metadata so that when accessing a small file, the metadata server can push both metadata and data to the client at the same time. In this way, they can reduce the overhead of calculating the data offset and save the communication to the OSD. For this feature, they have only run some small scale testing but really saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with a 200GB SATA disk, 15 OSDs, 1 MDS, 1 MON. The sequential read performance for 1K-size files improved about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in a production environment for large scale PB-level storage, but they could not give a positive answer; it looks like Ceph is not even deployed widely at Dreamhost (subject to confirmation). From Li, they have only deployed Ceph for small scale storage (32 nodes), although they'd like to try 6000 nodes in the future. Improve Linux swap for flash storage (led by Shaohua Li). Because of high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges to efficiently use SSD for swap. SWAPOUT: swap_map scan. swap_map is the in-memory data structure that tracks swap disk usage, but it is a slow linear scan. It becomes a bottleneck when finding many adjacent pages in the SSD case. Shaohua Li has changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSD. IO pattern. In most cases, swap IO has an interleaved pattern because of multiple reclaimers, or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved IO to some extent, we cannot count on it completely. Hence a per-cpu cluster is added based on the previous change; it helps a reclaimer do sequential IO and makes it easier for the block layer to merge IO. TLB flush: if we're reclaiming one active page, we first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During the process, we need to clear the PTE twice: first the 'A' (ACCESS) bit, then the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only part of the work has been pushed to mainline. SWAPIN: a page fault does iodepth=1 sync IO, but it's a little wasteful to issue only a page-sized IO. The obvious solution is doing swap readahead. 
    But the current in-kernel swap readahead is arbitrary (always 8 pages), and it doesn't always perform well for both random and sequential access workloads. Shaohua introduced a new flag for madvise(MADV_WILLNEED) to do swap prefetch, so the changes happen in the userspace API and leave the in-kernel readahead unchanged (but I think some improvement can also be done here). SWAP discard: as we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to async discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard is also optimized to a cluster. Misc: lock contention. For many concurrent swapouts and swapins, lock contention such as on anon_vma or swap_lock is high, so he changed the swap_lock to a per-swap lock. But there is still some lock contention on very high speed SSDs because of the swapcache address_space lock. Zproject (led by Bob Liu). Bob gave us a very nice introduction to the current memory compression status. Now there are 3 projects (zswap/zram/zcache) which all aim at smoothing swap IO storms and improving performance, but they all have their own pros and cons. ZSWAP is implemented on top of the frontswap API and uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied", and it can only store at most two compressed pages in one page frame, so the max compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrink or reclaim happens: it decompresses a page frame into two newly allocated pages and then writes them to the real swap device, but this can fail when allocating the two pages. ZRAM acts as a compressed ramdisk used as a swap device, and it uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard since it needs more pages to uncompress and free just one page. ZRAM is preferred by embedded systems which may not have any real swap device. Now both ZRAM and ZSWAP are in the drivers/staging tree, and in the mm community there are some discussions of merging ZRAM into ZSWAP or vice versa, but no agreement yet. ZCACHE handles file page compression but it was removed from staging recently. From industry (led by Tang Jie, LSI). An LSI engineer introduced several new products to us. The first is raid5/6 cards that use full stripe writes to improve performance. The second one he introduced is the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes. It's called DuraWrite and the typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression which is transparent to the upper layer. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is for databases. Another thing I think is worth mentioning is the NV-DRAM memory in NMR/Raptor, which is directly exposed to the host system. Applications can directly access the NV-DRAM via a memory address - using the standard system call mmap(). He said that it is very useful for database logging now. 
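
    As a rough illustration of the mmap() usage described for the NV-DRAM part above, here is a minimal sketch; the device path is hypothetical and the cache-flush/ordering details a real logging application would need are omitted:

        #include <fcntl.h>
        #include <sys/mman.h>
        #include <unistd.h>
        #include <cstdio>
        #include <cstring>

        // Sketch: map a (hypothetical) NV-DRAM device into the address space and write a
        // log record with ordinary store instructions, no read()/write() on the hot path.
        int main()
        {
            const size_t kMapLen = 4096;
            int fd = open("/dev/nvdram0", O_RDWR);             // hypothetical device node
            if (fd < 0) { std::perror("open"); return 1; }

            void* base = mmap(nullptr, kMapLen, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (base == MAP_FAILED) { std::perror("mmap"); close(fd); return 1; }

            const char record[] = "txn=42 commit";
            std::memcpy(base, record, sizeof(record));         // the store lands directly in NV-DRAM

            munmap(base, kMapLen);
            close(fd);
            return 0;
        }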
    These kinds of NVM products have begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will affect the current OS layers, especially the file system; e.g. its journaling may need to be redesigned to fully utilize this nonvolatile memory. OCFS2 (led by Canquan Shen). Without a doubt, Huawei is the biggest contributor to OCFS2 in the past two years. They have posted 46 upstream patches and 39 patches have been merged. Their current project is based on 32/64 node clusters, but they have also tried 128 nodes at the experimental stage. The major work they are doing is to support ATS (atomic test and set), which can work with DLM at the same time. It looks like this idea is inspired by the VMware VMFS locking, i.e. http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html CLK - 18th October 2013. Improving Linux Development with Better Tools (Andi Kleen). This talk focused on how to find and solve bugs as Linux complexity grows. Generally, we can do this with the following kinds of tools: Static code checker tools, e.g. sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. These can help check a lot of things, simple mistakes, complex problems, but the challenges are: some are very slow, false positives, and a concentrated effort may be needed to get false positives down. Especially, no static checker I found can follow indirect calls (“OO in C”, common in the kernel): struct foo_ops { int (*do_foo)(struct foo *obj); } foo->do_foo(foo); Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite, then someone could run all the dynamic checkers. Fuzzers/test suites, e.g. Trinity is a great tool, it finds many bugs, but needs a manual model for each syscall. Modern fuzzers are moving to automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf Debuggers/tracers to understand code, e.g. ftrace, can dump on events/oops/custom triggers, but still too much overhead in many cases to run always during debug. Tools to read/understand source, e.g. grep/cscope work great for many cases, but do not understand indirect pointers (the OO in C model used in the kernel); give us all “do_foo” instances: struct foo_ops { int (*do_foo)(struct foo *obj); } = { .do_foo = my_foo }; foo->do_foo(foo); It would be great to have a cscope-like tool that understands this based on types/initializers. XFS: The High Performance Enterprise File System (Jeff Liu) [slides]. I gave a talk introducing the disk layout, unique features, as well as the recent changes. The slides include some charts comparing the performance of XFS/Btrfs/Ext4 for small files. About a dozen users raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and another attendee. The attendee questions were mainly focused on stability and comparison with other file systems. Linux Containers (Feng Gao). The speaker introduced the purpose of the various kinds of namespaces, including mount/UTS/IPC/Network/Pid/User, as well as the system API/ABI. For the userspace tools, he mainly focused on Libvirt LXC rather than LXC itself. 
    Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two more possible new namespaces in the future: the first is audit, though it is not yet clear whether it should be assigned to the user namespace or not. Another is syslog, but the question is, do we really need it? In-memory Compression (Bob Liu). Same as at CLSF, a nice introduction that I have already mentioned above. Misc: there were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler etc., but my head is not big enough to record all those things. -- Jeff Liu

    Read the article

  • Metro: Understanding CSS Media Queries

    - by Stephen.Walther
    If you are building a Metro style application then your application needs to look great when used on a wide variety of devices. Your application needs to work on tiny little phones, slates, desktop monitors, and the super high resolution displays of the future. Your application also must support portable devices used with different orientations. If someone tilts their phone from portrait to landscape mode then your application must still be usable. Finally, your Metro style application must look great in different states. For example, your Metro application can be in a “snapped state” when it is shrunk so it can share screen real estate with another application. In this blog post, you learn how to use Cascading Style Sheet media queries to support different devices, different device orientations, and different application states. First, you are provided with an overview of the W3C Media Query recommendation and you learn how to detect standard media features. Next, you learn about the Microsoft extensions to media queries which are supported in Metro style applications. For example, you learn how to use the –ms-view-state feature to detect whether an application is in a “snapped state” or “fill state”. Finally, you learn how to programmatically detect the features of a device and the state of an application. You learn how to use the msMatchMedia() method to execute a media query with JavaScript. Using CSS Media Queries Media queries enable you to apply different styles depending on the features of a device. Media queries are not only supported by Metro style applications, most modern web browsers now support media queries including Google Chrome 4+, Mozilla Firefox 3.5+, Apple Safari 4+, and Microsoft Internet Explorer 9+. Loading Different Style Sheets with Media Queries Imagine, for example, that you want to display different content depending on the horizontal resolution of a device. In that case, you can load different style sheets optimized for different sized devices. Consider the following HTML page: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>U.S. Robotics and Mechanical Men</title> <link href="main.css" rel="stylesheet" type="text/css" /> <!-- Less than 1100px --> <link href="medium.css" rel="stylesheet" type="text/css" media="(max-width:1100px)" /> <!-- Less than 800px --> <link href="small.css" rel="stylesheet" type="text/css" media="(max-width:800px)" /> </head> <body> <div id="header"> <h1>U.S. Robotics and Mechanical Men</h1> </div> <!-- Advertisement Column --> <div id="leftColumn"> <img src="advertisement1.gif" alt="advertisement" /> <img src="advertisement2.jpg" alt="advertisement" /> </div> <!-- Product Search Form --> <div id="mainContentColumn"> <label>Search Products</label> <input id="search" /><button>Search</button> </div> <!-- Deal of the Day Column --> <div id="rightColumn"> <h1>Deal of the Day!</h1> <p> Buy two cameras and get a third camera for free! Offer is good for today only. </p> </div> </body> </html> The HTML page above contains three columns: a leftColumn, mainContentColumn, and rightColumn. When the page is displayed on a low resolution device, such as a phone, only the mainContentColumn appears: When the page is displayed in a medium resolution device, such as a slate, both the leftColumn and the mainContentColumns are displayed: Finally, when the page is displayed in a high-resolution device, such as a computer monitor, all three columns are displayed: Different content is displayed with the help of media queries. 
The page above contains three style sheet links. Two of the style links include a media attribute: <link href="main.css" rel="stylesheet" type="text/css" /> <!-- Less than 1100px --> <link href="medium.css" rel="stylesheet" type="text/css" media="(max-width:1100px)" /> <!-- Less than 800px --> <link href="small.css" rel="stylesheet" type="text/css" media="(max-width:800px)" /> The main.css style sheet contains default styles for the elements in the page. The medium.css style sheet is applied when the page width is less than 1100px. This style sheet hides the rightColumn and changes the page background color to lime: html { background-color: lime; } #rightColumn { display:none; } Finally, the small.css style sheet is loaded when the page width is less than 800px. This style sheet hides the leftColumn and changes the page background color to red: html { background-color: red; } #leftColumn { display:none; } The different style sheets are applied as you stretch and contract your browser window. You don’t need to refresh the page after changing the size of the page for a media query to be applied: Using the @media Rule You don’t need to divide your styles into separate files to take advantage of media queries. You can group styles by using the @media rule. For example, the following HTML page contains one set of styles which are applied when a device’s orientation is portrait and another set of styles when a device’s orientation is landscape: <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <title>Application1</title> <style type="text/css"> html { font-family:'Segoe UI Semilight'; font-size: xx-large; } @media screen and (orientation:landscape) { html { background-color: lime; } p.content { width: 50%; margin: auto; } } @media screen and (orientation:portrait) { html { background-color: red; } p.content { width: 90%; margin: auto; } } </style> </head> <body> <p class="content"> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </p> </body> </html> When a device has a landscape orientation then the background color is set to the color lime and the text only takes up 50% of the available horizontal space: When the device has a portrait orientation then the background color is red and the text takes up 90% of the available horizontal space: Using Standard CSS Media Features The official list of standard media features is contained in the W3C CSS Media Query recommendation located here: http://www.w3.org/TR/css3-mediaqueries/ Here is the official list of the 13 media features described in the standard: · width – The current width of the viewport · height – The current height of the viewport · device-width – The width of the device · device-height – The height of the device · orientation – The value portrait or landscape · aspect-ratio – The ratio of width to height · device-aspect-ratio – The ratio of device width to device height · color – The number of bits per color supported by the device · color-index – The number of colors in the color lookup table of the device · monochrome – The number of bits in the monochrome frame buffer · resolution – The density of the pixels supported by the device · scan – The values progressive or interlace (used for TVs) · grid – The values 0 or 1 which indicate whether the device supports a grid or a bitmap Many of the media features in the list above support the min- and max- prefix. 
For example, you can test for the min-width using a query like this: (min-width:800px) You can use the logical and operator with media queries when you need to check whether a device supports more than one feature. For example, the following query returns true only when the width of the device is between 800 and 1,200 pixels: (min-width:800px) and (max-width:1200px) Finally, you can use the different media types – all, braille, embossed, handheld, print, projection, screen, speech, tty, tv — with a media query. For example, the following media query only applies to a page when a page is being printed in color: print and (color) If you don’t specify a media type then media type all is assumed. Using Metro Style Media Features Microsoft has extended the standard list of media features which you can include in a media query with two custom media features: · -ms-high-contrast – The values any, black-white, white-black · -ms-view-state – The values full-screen, fill, snapped, device-portrait You can take advantage of the –ms-high-contrast media feature to make your web application more accessible to individuals with disabilities. In high contrast mode, you should make your application easier to use for individuals with vision disabilities. The –ms-view-state media feature enables you to detect the state of an application. For example, when an application is snapped, the application only occupies part of the available screen real estate. The snapped application appears on the left or right side of the screen and the rest of the screen real estate is dominated by the fill application (Metro style applications can only be snapped on devices with a horizontal resolution of greater than 1,366 pixels). Here is a page which contains style rules for an application in both a snap and fill application state: <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <title>MyWinWebApp</title> <style type="text/css"> html { font-family:'Segoe UI Semilight'; font-size: xx-large; } @media screen and (-ms-view-state:snapped) { html { background-color: lime; } } @media screen and (-ms-view-state:fill) { html { background-color: red; } } </style> </head> <body> <p class="content"> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </p> </body> </html> When the application is snapped, the application appears with a lime background color: When the application state is fill then the background color changes to red: When the application takes up the entire screen real estate – it is not in snapped or fill state – then no special style rules apply and the application appears with a white background color. Querying Media Features with JavaScript You can perform media queries using JavaScript by taking advantage of the window.msMatchMedia() method. This method returns a MSMediaQueryList which has a matches method that represents success or failure. For example, the following code checks whether the current device is in portrait mode: if (window.msMatchMedia("(orientation:portrait)").matches) { console.log("portrait"); } else { console.log("landscape"); } If the matches property returns true, then the device is in portrait mode and the message “portrait” is written to the Visual Studio JavaScript Console window. Otherwise, the message “landscape” is written to the JavaScript Console window. 
You can create an event listener which triggers code whenever the results of a media query changes. For example, the following code writes a message to the JavaScript Console whenever the current device is switched into or out of Portrait mode: window.msMatchMedia("(orientation:portrait)").addListener(function (mql) { if (mql.matches) { console.log("Switched to portrait"); } }); Be aware that the event listener is triggered whenever the result of the media query changes. So the event listener is triggered both when you switch from landscape to portrait and when you switch from portrait to landscape. For this reason, you need to verify that the matches property has the value true before writing the message. Summary The goal of this blog entry was to explain how CSS media queries work in the context of a Metro style application written with JavaScript. First, you were provided with an overview of the W3C CSS Media Query recommendation. You learned about the standard media features which you can query such as width and orientation. Next, we focused on the Microsoft extensions to media queries. You learned how to use –ms-view-state to detect whether a Metro style application is in “snapped” or “fill” state. You also learned how to use the msMatchMedia() method to perform a media query from JavaScript.

    Read the article

  • The Oracle Enterprise Linux Software and Hardware Ecosystem

    - by sergio.leunissen
    It's been nearly four years since we launched the Unbreakable Linux support program and with it the free Oracle Enterprise Linux software. Since then, we've built up an extensive ecosystem of hardware and software partners. Oracle works directly with these vendors to ensure joint customers can run Oracle Enterprise Linux. As Oracle Enterprise Linux is fully--both source and binary--compatible with Red Hat Enterprise Linux (RHEL), there is minimal work involved for software and hardware vendors to test their products with it. We develop our software on Oracle Enterprise Linux and perform full certification testing on Oracle Enterprise Linux as well. Due to the compatibility between Oracle Enterprise Linux and RHEL, Oracle also certifies its software for use on RHEL, without any additional testing. Oracle Enterprise Linux tracks RHEL by publishing freely downloadable installation media on edelivery.oracle.com/linux and updates, bug fixes and security errata on Unbreakable Linux Network (ULN). At the same time, Oracle's Linux kernel team is shaping the future of enterprise Linux distributions by developing technologies and features that matter to customers who deploy Linux in the data center, including file systems, memory management, high performance computing, data integrity and virtualization. All this work is contributed to the Linux and Xen communities. The list below is a sample of the partners who have certified their products with Oracle Enterprise Linux. If you're interested in certifying your software or hardware with Oracle Enterprise Linux, please contact us via [email protected] Chip Manufacturers Intel, Intel Enabled Server Acceleration Alliance AMD Server vendors Cisco Unified Computing System Dawning Dell Egenera Fujitsu HP Huawei IBM NEC Sun/Oracle Storage Systems, Volume Management and File Systems 3Par Compellent EMC VPLEX FalconStor Fusion-io Hitachi Data Systems HP Storage Array Systems Lustre Network Appliance OCFS2 PillarData Symantec Veritas Storage Foundation Networking: Switches, Host Bus Adapters (HBAs), Converged Network Adapters (CNAs), InfiniBand Brocade Emulex Mellanox QLogic Voltaire SOA and Middleware ActiveState ActivePerl, ActivePython Tibco Zend Backup, Recovery & Replication Arkeia Network Backup Suite BakBone NetVault CommVault Simpana 8 EMC Networker, Replication Manager FalconStor Continuous Data Protector HP Data Protector NetApp Snapmanager Quest LiteSpeed Engine Steeleye Data Replication, Disaster Recovery Symantec NetBackup, Veritas Volume Replicator, Symantec Backup Exec Zmanda Amanda Enterprise Data Center Automation BMC CA Unicenter HP Server Automation (formerly Opsware), System Management Homepage Oracle Enterprise Manager Ops Center Quest Vizioncore vFoglight Pro TeamQuest Manager Clustering & High Availability FUJITSU x10sure NEC Express Cluster X Steeleye Lifekeeper Symantec Cluster Server Univa UniCluster Virtualization Platforms and Cloud Providers Amazon EC2 Citrix XenServer Rackspace Cloud VirtualBox VMWare ESX Security Management ArcSight: Enterprise Security Manager, Logger CA Access Control Centrify Suite Ecora Auditor FoxT Manager Likewise: Unix Account Management Lumension Endpoint Management and Security Suite QualysGuard Suite Quest Privilege Manager McAfee Application Control, Change ControlIntegrity Monitor, Integrity Control, PCI Pro Solidcore S3 Symantec Enterprise Security Manager (ESM) Tripwire Trusted Computer Solutions

    Read the article

  • Migrating from IBM AIX/DB2 Power systems to Oracle Technologies

    - by zeynep.koch(at)oracle.com
    If you are planning to migrate from IBM DB2 on AIX Power Systems to a more open and better-performing computing environment--one that offers enhanced flexibility, clustering, availability, and security, as well as lower maintenance--then download this guide that outlines migrating to Oracle Database 11g and Oracle Linux running on Oracle's Sun Fire X4800 server. This guide shows you how to: move sample applications from IBM DB2 on an IBM Power System to Oracle Database 11g Release 2; install Oracle Linux and Oracle Database 11g Release 2 on Oracle's Sun Fire X4800 server; and migrate user databases from the IBM Power System to Oracle's Sun Fire X4800 server. Download

    Read the article

  • Make Your 64 bit Computer Look like a Commodore 64

    - by Matthew Guay
    The Commodore 64 was one of the bestselling home computers ever, and many geeks got their first computing experience on one of these early personal computers. Here’s an easy way to revisit the early years of personal computing with a theme for Windows 7. With only 64 KB of RAM and an 8-bit processor, the Commodore 64 is light-years behind today’s computers.  But with a Windows 7 themepack, you can turn back the years and give your computer a quick overhaul to look more like its ancient predecessor. Age Windows 7 with a click: Download the Commodore 64 theme from PC World (link below), and unzip the files. Now, double-click on the Themepack file to apply the theme. This will open your Personalization panel and will automatically change your system fonts, window style, background, and more. Your desktop will go from your Windows 7 look… to a modified Windows 7 look that is reminiscent of the Commodore 64. Open an application to see all the changes… notice the old-style font in the window border and menus. This theme also changes your Computer, Recycle Bin, and User folder icons to Commodore 64-inspired icons. And, if you want to go back to the standard Windows 7 look and feel, it’s only a click away in the Personalization dialog. Right-click on your desktop, select Personalize, and then choose the theme you want. Conclusion: Although this doesn’t give you the real look and feel of the Commodore 64, it is still a fun way to experience a bit of computer nostalgia. There are tons of excellent themes available for Windows 7, so check back for more exciting ways to customize your desktop! Link: Download the Commodore 64 theme for Windows 7

    Read the article

  • Interesting articles and blogs on SPARC T4

    - by mv
    Interesting articles and blogs on the SPARC T4 processor. I have consolidated all the interesting information I could get on the SPARC T4 processor and its hardware cryptographic capabilities. Hope it's useful. 1. Advantages of the SPARC T4 processor. The most important points in this T4 announcement are: "The SPARC T4 processor was designed from the ground up for high speed security and has a cryptographic stream processing unit (SPU) integrated directly into each processor core. These accelerators support 16 industry standard security ciphers and enable high speed encryption at rates 3 to 5 times that of competing processors. By integrating encryption capabilities directly inside the instruction pipeline, the SPARC T4 processor eliminates the performance and cost barriers typically associated with secure computing and makes it possible to deliver high security levels without impacting the user experience." The Data Sheet has more details on these: "New on-chip Encryption Instruction Accelerators with direct non-privileged support for 16 industry-standard cryptographic algorithms plus random number generation in each of the eight cores: AES, Camellia, CRC32c, DES, 3DES, DH, DSA, ECC, Kasumi, MD5, RSA, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512" I ran the "isainfo -v" command on a Solaris 11 SPARC T4-1 system. It shows the new instructions as expected: $ isainfo -v 64-bit sparcv9 applications crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc 32-bit sparc applications crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc v8plus div32 mul32 2. Dan Anderson's Blog has some interesting points about how these can be used: "New T4 crypto instructions include: aes_kexpand0, aes_kexpand1, aes_kexpand2, aes_eround01, aes_eround23, aes_eround01_l, aes_eround_23_l, aes_dround01, aes_dround23, aes_dround01_l, aes_dround_23_l. Having SPARC T4 hardware crypto instructions is all well and good, but how do we access it? The software is available with Solaris 11 and is used automatically if you are running Solaris on a SPARC T4. It is used internally in the kernel through kernel crypto modules. It is available in user space through the PKCS#11 library." 3. Dan's Blog on "Where's the Crypto Libraries?" Although this was written in 2009, it is still very useful: "Here's a brief tour of the major crypto libraries shown in the digraph: The libpkcs11 library contains the PKCS#11 API (C_*() functions, such as C_Initialize()). That in turn calls library pkcs11_softtoken or pkcs11_kernel, for userland or kernel crypto providers. The latter is used mostly for hardware-assisted cryptography (such as n2cp for Niagara2 SPARC processors), as that is performed more efficiently in kernel space with the "kCF" module (Kernel Crypto Framework). Additionally, for Solaris 10, strong crypto algorithms were split off in separate libraries, pkcs11_softtoken_extra. libcryptoutil contains low-level utility functions to help implement cryptography. libsoftcrypto (OpenSolaris and Solaris Nevada only) implements several symmetric-key crypto algorithms in software, such as AES, RC4, and DES3, and the bignum library (used for RSA). libmd implements MD5, SHA, and SHA2 message digest algorithms" 4. Difference between T3 and T4. The diagram in this blog is good and self-explanatory. 
Jeff's blog also highlights the differences  "The T4 servers have improved crypto acceleration, described at https://blogs.oracle.com/DanX/entry/sparc_t4_openssl_engine. It is "just built in" so administrators no longer have to assign crypto accelerator units to domains - it "just happens". Every physical or virtual CPU on a SPARC-T4 has full access to hardware based crypto acceleration at all times. .... For completeness sake, it's worth noting that the T4 adds more crypto algorithms, and accelerates Camelia, CRC32c, and more SHA-x." 5. About performance counters In this blog, performance counters are explained : "Note that unlike T3 and before, T4 crypto doesn't require kernel modules like ncp or n2cp, there is no visibility of crypto hardware with kstats or cryptoadm. T4 does provide hardware counters for crypto operations.  You can see these using cpustat: cpustat -c pic0=Instr_FGU_crypto 5 You can check the general crypto support of the hardware and OS with the command "isainfo -v". Since T4 crypto's implementation now allows direct userland access, there are no "crypto units" visible to cryptoadm.  " For more details refer Martin's blog as well. 6. How to turn off  SPARC T4 or Intel AES-NI crypto acceleration  I found this interesting blog from Darren about how to turn off  SPARC T4 or Intel AES-NI crypto acceleration. "One of the new Solaris 11 features of the linker/loader is the ability to have a single ELF object that has multiple different implementations of the same functions that are selected at runtime based on the capabilities of the machine.   The alternate to this is having the application coded to call getisax(2) system call and make the choice itself.  We use this functionality of the linker/loader when we build the userland libraries for the Solaris Cryptographic Framework (specifically libmd.so and libsoftcrypto.so) The Solaris linker/loader allows control of a lot of its functionality via environment variables, we can use that to control the version of the cryptographic functions we run.  To do this we simply export the LD_HWCAP environment variable with values that tell ld.so.1 to not select the HWCAP section matching certain features even if isainfo says they are present.  This will work for consumers of the Solaris Cryptographic Framework that use the Solaris PKCS#11 libraries or use libmd.so interfaces directly.  For SPARC T4 : export LD_HWCAP="-aes -des -md5 -sha256 -sha512 -mont -mpul" .. For Intel systems with AES-NI support: export LD_HWCAP="-aes"" Note that LD_HWCAP is explained in  http://docs.oracle.com/cd/E23823_01/html/816-5165/ld.so.1-1.html "LD_HWCAP, LD_HWCAP_32, and LD_HWCAP_64 -  Identifies an alternative hardware capabilities value... A “-” prefix results in the capabilities that follow being removed from the alternative capabilities." 7. Whitepaper on SPARC T4 Servers—Optimized for End-to-End Data Center Computing This Whitepaper on SPARC T4 Servers—Optimized for End-to-End Data Center Computing explains more details.  It has DTrace scripts which may come in handy : "To ensure the hardware-assisted cryptographic acceleration is configured to use and working with the security scenarios, it is recommended to use the following Solaris DTrace script. 
#!/usr/sbin/dtrace -s pid$1:libsoftcrypto:yf*:entry, pid$target:libsoftcrypto:rsa*:entry, pid$1:libmd:yf*:entry { @[probefunc] = count(); } tick-1sec { printa(@ops); trunc(@ops); }" Note that I have slightly modified the D Script to have RSA "libsoftcrypto:rsa*:entry" as well as per recommendations from Chi-Chang Lin. 8. References http://www.oracle.com/us/corporate/features/sparc-t4-announcement-494846.html http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-t4-1-ds-487858.pdf https://blogs.oracle.com/DanX/entry/sparc_t4_openssl_engine https://blogs.oracle.com/DanX/entry/where_s_the_crypto_libraries https://blogs.oracle.com/darren/entry/howto_turn_off_sparc_t4 http://docs.oracle.com/cd/E23823_01/html/816-5165/ld.so.1-1.html   https://blogs.oracle.com/hardware/entry/unleash_the_power_of_cryptography https://blogs.oracle.com/cmt/entry/t4_crypto_cheat_sheet https://blogs.oracle.com/martinm/entry/t4_performance_counters_explained  https://blogs.oracle.com/jsavit/entry/no_mau_required_on_a http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-t4-business-wp-524472.pdf
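
    To make the getisax(2) alternative mentioned in item 6 a little more concrete, here is a minimal sketch for Solaris on SPARC; the AV_SPARC_AES flag name is an assumption to be verified against <sys/auxv_SPARC.h> on the target release:

        #include <sys/auxv.h>   // getisax(2); the AV_SPARC_* flags come from the SPARC-specific header
        #include <cstdint>
        #include <cstdio>

        // Sketch: ask the kernel which instruction-set extensions are present and pick a
        // hardware AES path only when the T4 crypto instructions are actually reported.
        int main()
        {
            uint32_t caps[2] = {0, 0};
            getisax(caps, 2);

            bool hwAes = false;
        #ifdef AV_SPARC_AES
            hwAes = (caps[0] & AV_SPARC_AES) != 0;   // assumed flag name, see note above
        #endif
            std::printf(hwAes ? "AES instructions available: use the hardware path\n"
                              : "no AES instructions reported: fall back to software\n");
            return 0;
        }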

    Read the article

  • Deciding which technology to use is a big decision when no technology is an obvious choice

    Deciding which technology to use in a new venture or project is a big decision for any company when no technology is an obvious choice. It is always best to analyze the current requirements of the project, and also evaluate the existing technology climate, so that the correct technology based on the situation at the time is selected. When evaluating the requirements of a new project it is best to be open to as many technologies as possible initially, so a company can be sure that the right decision gets made. Another important aspect of the technology decision is what the current network and hardware environment can handle, and what would need to be adjusted if a specific technology were selected. For example, if the current network operating system is Linux then VB6 would force a huge change in the current computing environment. However, if the current network operating system were Windows-based, then very little change would be needed to allow for VB6, if any change had to be made at all. Finally, and most importantly, an analysis should be done regarding the skills and aspirations of the current technical employees. For example, if you have a team of Java programmers then forcing them to build something in C# might not be an ideal situation. However, having a team of VB.NET developers who want to develop something in C# would be a better situation in this example, because they are already familiar with the .NET Framework and have a desire to use the new technology. In addition to this analysis, the cost associated with building and maintaining the project is also a key factor. If two languages are ideal for a project but one technology will increase the budget or timeline by 50%, then it might not be the best choice in that situation. An ideal situation for developing C# applications would be a project that is built on existing Microsoft technologies. An example of this would be a company that uses Windows Server 2008 as its network operating system, Windows XP Pro as its main desktop operating system, Microsoft SQL Server 2008 as its primary database, and has a team of developers experienced in the .NET Framework. In the above situation Java would be a poor technology decision based on the current computing environment and the potential lack of Java experience among the company’s developers. It would take the developers longer to develop the application due to the fact that they would have to first learn the language and then become comfortable with it. Although these barriers do exist, it does not mean that it is not doable if the company and developers are committed to the project.

    Read the article

  • Visual Studio 2010 released!

    - by Daniel Moth
    Visual Studio 2010 releases to the world today. Get the full story from Soma's blog post (inc. links for buy, try etc). Our team is very proud of what we have contributed to this release and you can learn more about it through our content on the Parallel Computing MSDN home. Comments about this post welcome at the original blog.

    Read the article

  • SOA, Cloud and Service Technology Symposium a super success!

    - by JuergenKress
    The SOA, Cloud and Service Technology Symposium in London was a huge success. More than 600 international attendees participated. Our SOA & BPM Community had a great presence there. At the joint booth with the specialized partners link consulting, eProseed and Griffiths Waite, we presented the latest product updates and had many interesting discussions with customers and speakers. Special thanks to our HQ product management team Demed, Tim and Manas for coming over right before OOW. Also a very big thank you to Matthias Ziegler from Accenture for presenting our joint presentation individually! If you missed the conference, here are the key presentation links for your reference: Big Data and its impact on SOA Demed L'Her [View PDF] Building 21st Century Service-Oriented Airports Shyam Kumar [View PDF] Building Cloudy Services Anne Thomas Manes [View PDF] Community Management: The Next Wave of SOA Governance and API Management Tim E. Hall [View PDF] Elastic SOA in the Cloud Steve Millidge [View PDF] Governing Shared Services: On-Premise & In the Cloud Thomas Erl [View Video] Introducing the Cloud Computing Design Patterns Catalogue Thomas Erl and Amin Naserpour [CloudPatterns.org] Lost in Translation - Common Mistakes Interpreting Patterns Mark Simpson [View PDF] Moving Applications to the Cloud: Migration Options Anne Thomas Manes [View PDF] New Paradigms for Application Architecture: From Applications to IT Services Anne Thomas Manes [View PDF] NoSQL for Data Services, Data Virtualization & Big Data Guido Schmutz [View PDF] A Pragmatic Approach to Cloud Computing Andrea Morena [View PDF] The Successful Execution of the SOA and BPM Vision Using a Business Capability Framework: Concepts and Examples Clemens Utschig and Manas Deb [View PDF] Service Modeling & BPM Business Value Patterns Matthias Ziegler [View PDF] [Podcast] SOA Adoption in the Brazilian Ministry of Health - Case Study Ricardo Puttini and Andre Toffanello [PDF Coming Soon] SOA Environments are a Big Data Problem Markus Zirn, Splunk and Maciej Barcz [View PDF] SOA Governance at EDP: A Global Energy Company Manuel Rosa [View PDF] For all presentations please visit the SOA, Cloud and Service Technology Symposium Website. SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Symposium,Thomas Erl,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Spatial data in the UK

    - by simonsabin
    I am just loving the fact that the Ordnance Survey has now released a huge amount of data that can be used freely. I’ve downloaded the Panorama (tm) data http://www.ordnancesurvey.co.uk/oswebsite/products/land-form-panorama-contours/index.html, which is all the contours for the UK. This I’ve loaded into SQL Server using Safe Computing’s FME ( http://www.safe.com/ ). This is because the data is an AutoCAD DXF file and translating that to SQL Server spatial data is not easy. The FME workbench is not...(read more)

    Read the article

  • SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance

    - by pinaldave
    Earlier, I had written two articles related to Shrinking Database. I wrote about why Shrinking Database is not good. SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server SQL SERVER – What the Business Says Is Not What the Business Wants I received many comments on Why Database Shrinking is bad. Today we will go over a very interesting example that I have created for the same. Here are the quick steps of the example. Create a test database Create two tables and populate with data Check the size of both the tables Size of database is very low Check the Fragmentation of one table Fragmentation will be very low Truncate another table Check the size of the table Check the fragmentation of the one table Fragmentation will be very low SHRINK Database Check the size of the table Check the fragmentation of the one table Fragmentation will be very HIGH REBUILD index on one table Check the size of the table Size of database is very HIGH Check the fragmentation of the one table Fragmentation will be very low Here is the script for the same. USE MASTER GO CREATE DATABASE ShrinkIsBed GO USE ShrinkIsBed GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Create FirstTable CREATE TABLE FirstTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100)) GO -- Create Clustered Index on ID CREATE CLUSTERED INDEX [IX_FirstTable_ID] ON FirstTable ( [ID] ASC ) ON [PRIMARY] GO -- Create SecondTable CREATE TABLE SecondTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100)) GO -- Create Clustered Index on ID CREATE CLUSTERED INDEX [IX_SecondTable_ID] ON SecondTable ( [ID] ASC ) ON [PRIMARY] GO -- Insert One Hundred Thousand Records INSERT INTO FirstTable (ID,FirstName,LastName,City) SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Insert One Hundred Thousand Records INSERT INTO SecondTable (ID,FirstName,LastName,City) SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Check Fragmentations in the database SELECT avg_fragmentation_in_percent, fragment_count FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED') GO Let us check the table size and fragmentation. Now let us TRUNCATE the table and check the size and Fragmentation. 
    You can clearly see that after TRUNCATE, the size of the database is not reduced; it is still the same as before the TRUNCATE operation. After the Shrink Database operation (DBCC SHRINKDATABASE), we were able to reduce the size of the database, but if you look at the fragmentation, it is considerably high. The major problem with the Shrink operation is that it increases fragmentation of the database to a very high value. Higher fragmentation reduces the performance of the database, as reading from that particular table becomes very expensive. One of the ways to reduce the fragmentation is to rebuild the index. Let us rebuild the index and observe fragmentation and database size.
    -- Rebuild Index on SecondTable
    ALTER INDEX IX_SecondTable_ID ON SecondTable REBUILD
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentation in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    You can see that after rebuilding, fragmentation drops back to a very low value (almost the same as the original value); however, the database size grows well beyond the original. Before rebuilding, the size of the database was 5 MB, and after rebuilding, it is around 20 MB. A regular index rebuild takes place in the same user database where the index lives, which usually increases the size of the database. Look at the irony of shrinking the database.
    One person shrinks the database to gain space (thinking it will help performance), which leads to an increase in fragmentation (reducing performance). To reduce the fragmentation, one rebuilds the index, which makes the database grow well beyond its original (pre-shrink) size. So, by shrinking, one usually does not gain what one was looking for, and rebuilding the index is not the best suggestion either, as that will make the database grow again. I have always remembered the excellent post from Paul Randal on why shrinking the database is bad, and I suggest everyone read it for accuracy and an interesting conversation. Let us run the following script, where we shrink the database and then REORGANIZE.
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentation in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    -- Shrink the Database
    DBCC SHRINKDATABASE (ShrinkIsBed);
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentation in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    -- Reorganize Index on SecondTable
    ALTER INDEX IX_SecondTable_ID ON SecondTable REORGANIZE
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentation in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    You can see that REORGANIZE does not increase the size of the database, nor does it remove the fragmentation. Again, I in no way suggest that REORGANIZE is the solution here; this is purely an observation from the demo. Read Paul Randal's blog post. The following script will clean up the database.
    -- Clean up
    USE MASTER
    GO
    ALTER DATABASE ShrinkIsBed SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    GO
    DROP DATABASE ShrinkIsBed
    GO
    There are a few valid cases for shrinking a database as well, but they are not covered in this blog post; we will cover that area some other time in the future. Additionally, one can rebuild an index in tempdb as well, and we will also talk about that in the future. Brent has written a good summary blog post as well. Are you shrinking your database? Well, when are you going to stop shrinking it? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
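    As a hedged aside (not part of the original post): if free space really must be returned to the operating system after a large data purge, DBCC SHRINKFILE with the TRUNCATEONLY option only releases unused space at the end of the data file and does not move pages, so it avoids the fragmentation demonstrated above. The logical file name below assumes the default naming for a database created as in the demo script.
        -- Release only the trailing free space of the primary data file; no pages are moved.
        USE ShrinkIsBed
        GO
        DBCC SHRINKFILE (ShrinkIsBed, TRUNCATEONLY);
        GO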

    Read the article

  • Architects overcoming challenges in the cloud

    - by stephen.g.bennett
    Computerworld has released an article based on Silver Clouds, Dark Linings: A Concise Guide to Cloud Computing. This excerpt is from the roadmap chapter of the book. The book highlights common techniques in building roadmaps, such as current reality, future vision, gap analysis, and the roadmap itself, but it also goes into detail on identifying the type of organization you are and the common challenges you will need to address within your roadmap. In addition, over at ArchBeat they have released a four-part interview discussing the book. Have a happy holiday.

    Read the article

  • Comparison of Community Linux Distributions for the Enterprise

    Wazi: "Looking for ways to save money on your computing infrastructure? Heard about Linux uptime but need to do more research? You're not alone. Community Linux distros have become increasingly popular within the enterprise as organizations look to cut costs without compromising on functionality and reliability."

    Read the article

  • NoSQL Memcached API for MySQL: Latest Updates

    - by Mat Keep
    With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, Key-Value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees. In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure. This and more is discussed in the latest Guide to MySQL and NoSQL, where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database. The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes. Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for on-line DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime. Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster. Let's look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster. Memcached Implementation for InnoDB: The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate. As illustrated in the following figure, Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API. Figure 1: Memcached API Implementation for InnoDB. With the Memcached daemon running in the same process space, users get very low latency access to their data while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web / application servers can remotely access the Memcached / InnoDB server to get direct access to a shared data set. With simultaneous SQL access, users can maintain all the advanced functionality offered by InnoDB, including support for Foreign Keys, XA transactions and complex JOIN operations. Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key/value pairs, with a single low-end commodity server supporting nearly 70,000 Transactions per Second. Figure 2: Over 9x Faster INSERT Operations. The delivered performance demonstrates that MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts with the added assurance of transactional guarantees.
    You can check out the latest Memcached / InnoDB developments and benchmarks here. You can learn how to configure the Memcached API for InnoDB here. Memcached Implementation for MySQL Cluster: Memcached API support for MySQL Cluster was introduced with the General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces that are already available for MySQL Cluster. Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities. All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking to ensure complete synchronization between the database and cache. Figure 3: Memcached API Implementation with MySQL Cluster. Implementation is simple: 1. The application sends reads and writes to the Memcached process (using the standard Memcached API). 2. This invokes the Memcached Driver for NDB (which is part of the same process). 3. The NDB API is called, providing very quick access to the data held in MySQL Cluster's data nodes. The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer. The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster) and the application doesn't have to care: it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized. Using Memcached for Schema-less Data: By default, every Key / Value is written to the same table, with each Key / Value pair stored in a single row, thus allowing schema-less data storage. Alternatively, the developer can define a key-prefix so that each value is linked to a pre-defined column in a specific table. Of course, if the application needs to access the same data through SQL, developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster. Conclusion: Download the Guide to MySQL and NoSQL to learn more about NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database. See how to build a social app with MySQL Cluster and the Memcached API from our on-demand webinar, or take a look at the docs. Don't hesitate to use the comments section below for any questions you may have.
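    To make the InnoDB configuration flow described above a little more concrete, here is a hedged sketch of mapping an InnoDB table to the Memcached plugin in MySQL 5.6. The schema and table names are illustrative, and the exact container columns and plugin library name should be checked against the MySQL documentation for your version.
        -- Illustrative key/value table; the plugin expects columns for key, value, flags, cas and expiry.
        CREATE DATABASE IF NOT EXISTS demo;
        CREATE TABLE demo.kv_store (
            c1 VARCHAR(32) NOT NULL,   -- key
            c2 VARCHAR(1024),          -- value
            c3 INT,                    -- flags
            c4 BIGINT UNSIGNED,        -- cas
            c5 INT,                    -- expiry
            PRIMARY KEY (c1)
        ) ENGINE=InnoDB;
        -- Register the table with the memcached plugin; the innodb_memcache schema is
        -- created by the configuration script that ships with MySQL 5.6.
        INSERT INTO innodb_memcache.containers
            (name, db_schema, db_table, key_columns, value_columns,
             flags, cas_column, expire_time_column, unique_idx_name_on_key)
        VALUES
            ('kv', 'demo', 'kv_store', 'c1', 'c2', 'c3', 'c4', 'c5', 'PRIMARY');
        -- Load the plugin; standard memcached clients can now read and write these rows
        -- on port 11211, bypassing the SQL layer, while SQL queries still see the same table.
        INSTALL PLUGIN daemon_memcached SONAME 'libmemcached.so';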

    Read the article

  • MySQL – Scalability on Amazon RDS: Scale out to multiple RDS instances

    - by Pinal Dave
    Today, I’d like to discuss getting better MySQL scalability on Amazon RDS. The question of the day: “What can you do when a MySQL database needs to scale write-intensive workloads beyond the capabilities of the largest available machine on Amazon RDS?” Let’s take a look. In a typical EC2/RDS set-up, users connect to app servers from their mobile devices and tablets, computers, browsers, etc.  Then app servers connect to an RDS instance (web/cloud services) and in some cases they might leverage some read-only replicas.   Figure 1. A typical RDS instance is a single-instance database, with read replicas.  This is not very good at handling high write-based throughput. As your application becomes more popular you can expect an increasing number of users, more transactions, and more accumulated data.  User interactions can become more challenging as the application adds more sophisticated capabilities. The result of all this positive activity: your MySQL database will inevitably begin to experience scalability pressures. What can you do? Broadly speaking, there are four options available to improve MySQL scalability on RDS. 1. Larger RDS Instances – If you’re not already using the maximum available RDS instance, you can always scale up – to larger hardware.  Bigger CPUs, more compute power, more memory et cetera. But the largest available RDS instance is still limited.  And they get expensive. “High-Memory Quadruple Extra Large DB Instance”: 68 GB of memory 26 ECUs (8 virtual cores with 3.25 ECUs each) 64-bit platform High I/O Capacity Provisioned IOPS Optimized: 1000Mbps 2. Provisioned IOPs – You can get provisioned IOPs and higher throughput on the I/O level. However, there is a hard limit with a maximum instance size and maximum number of provisioned IOPs you can buy from Amazon and you simply cannot scale beyond these hardware specifications. 3. Leverage Read Replicas – If your application permits, you can leverage read replicas to offload some reads from the master databases. But there are a limited number of replicas you can utilize and Amazon generally requires some modifications to your existing application. And read-replicas don’t help with write-intensive applications. 4. Multiple Database Instances – Amazon offers a fourth option: “You can implement partitioning,thereby spreading your data across multiple database Instances” (Link) However, Amazon does not offer any guidance or facilities to help you with this. “Multiple database instances” is not an RDS feature.  And Amazon doesn’t explain how to implement this idea. In fact, when asked, this is the response on an Amazon forum: Q: Is there any documents that describe the partition DB across multiple RDS? I need to use DB with more 1TB but exist a limitation during the create process, but I read in the any FAQ that you need to partition database, but I don’t find any documents that describe it. A: “DB partitioning/sharding is not an official feature of Amazon RDS or MySQL, but a technique to scale out database by using multiple database instances. The appropriate way to split data depends on the characteristics of the application or data set. Therefore, there is no concrete and specific guidance.” So now what? The answer is to scale out with ScaleBase. Amazon RDS with ScaleBase: What you get – MySQL Scalability! ScaleBase is specifically designed to scale out a single MySQL RDS instance into multiple MySQL instances. Critically, this is accomplished with no changes to your application code.  
    Your application continues to “see” one database. ScaleBase does all the work of managing and enforcing an optimized data distribution policy to create multiple MySQL instances. With ScaleBase, data distribution, transactions, concurrency control, and two-phase commit are all 100% transparent and 100% ACID-compliant, so applications, services and tooling continue to interact with your distributed RDS as if it were a single MySQL instance. The result: now you can cost-effectively leverage multiple MySQL RDS instances to scale out write-intensive workloads to an unlimited number of users, transactions, and data. Amazon RDS with ScaleBase: What you keep – Everything! And how does this change your Amazon environment? 1. Keep your application, unchanged – There is no change to your application development life-cycle at all. You still use your existing development tools, frameworks and libraries. Application quality assurance and testing cycles stay the same. And, critically, you stay with an ACID-compliant MySQL environment. 2. Keep your RDS value-added services – The value-added services that you rely on are all still available. Amazon will continue to handle database maintenance and updates for you. You can still leverage High Availability via Multi-AZ. And, if it benefits your application's throughput, you can still use read replicas. 3. Keep your RDS administration – Finally, the RDS monitoring and provisioning tools you rely on still work as they did before. With your one large MySQL instance now split into multiple instances, you can actually use less expensive, smaller available RDS hardware and continue to see better database performance. Conclusion: Amazon RDS is a tremendous service, but it doesn’t offer solutions to scale beyond a single MySQL instance. Larger RDS instances get more expensive. And when you max out on the available hardware, you’re stuck. Amazon recommends scaling out your single instance into multiple instances for transaction-intensive apps, but offers no services or guidance to help you. This is where ScaleBase comes in to save the day. It gives you a simple and effective way to create multiple MySQL RDS instances, while removing all the complexities typically caused by “DIY” sharding, and with no changes to your applications. With ScaleBase you continue to leverage the AWS/RDS ecosystem: commodity hardware and value-added services like read replicas, Multi-AZ, maintenance/updates, and administration with monitoring tools and provisioning. SCALEBASE ON AMAZON: If you’re curious to try ScaleBase on Amazon, it can be found here – Download NOW. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • CodePlex Daily Summary for Saturday, April 03, 2010

    CodePlex Daily Summary for Saturday, April 03, 2010New ProjectsASP.NET MVC Demo: aspnetmvcdemoClasslessInterDomainRouting: ClasslessInterDomainRouting provides a class that is designed to detail with CIDR requests and ranges, it is developed within the C# Langauge and f...ClientSideRefactor: Plugin for Visual Studio.ColinTest: ColinTestePMS: An educational project to learn ASP.Net MVC, entity framework using vs 2010Extensible ASP.NET: Extensible Framework on top of ASP.NET - infrastructure level. Uses MEF for extensibility.Franchise Computing Model: Franchise Computing is a client-centric, contract-oriented, consumption-based computing model. Its framework allows service providers and consumers...GameEngine ReactorFX: Set of tools and code snippets for creation DirectX based games. Also provides a number of ideas, algorythms and problem-solutions.It's All Just Ones And Zeros: Utility code libraries for Vault API developers.Live Writer Picasa Plugin: Live Writer Picasa Plugin is a plugin for Windows Live Writer that allows you to embed photos from your Picasa Web Albums into your blog posts. Liv...Managed SDK for Meizu Cell Phone: The goal of this project is to deliver an open source managed SDK for Meizu cell phones, currently for M8. Media Player Field Type: Display a media player in a column of you document library. The library can contain movie files of diferent formats. The player will appear in the ...praca magisterska: This is my thesis: Algebraical aspects of modern cryptography,Pyx: An experimental programming language for statistics.SharpHydroLiDAR: A C# version of Lidar Hydrographic ExtractionSql Server Mds Destination: SSIS destination transform component for SQL Server Master Data ServicesStackOverflow.Net: A C# library for the StackOverflow API (currently in beta). Provides methods for every call currently in the StackOverflow API.TRX Merger Utility: People working on test projects that involve test management and execution from Visual Studio Team System 2008 and who do not have a TFS server for...UniPlanner: The UniPlanner project goal is to develop a web application able to visualize and schedule a university timetable.WikiNETParser: Wiki .NET Parser, Open Source project powered by ANTLR. Syntax defined in 3(4) files Lexer, Grammar, AST Parser.New ReleasesaaronERP builder - a framework to create customized ERP solutions: aaronERP_0.4.0.0: Changes (compared to version 0.3.0.0) : Businesslayer : - Caching of data-tables - ITranslatable Interface for mutli-language DAOs Web-Frontend: ...BatterySaver: Version 0.5: Add support for executing a power state event manually (Issue) Add support for battery percentage thresholds (Issue)ColinTest: asdfzxcv: asdfasdfComposer: V1.0.402.2001 Beta: Minor bug fixes Minor changes in interfaces Added documentation to the setup packageDynamic Configuration: Dynamic Configuration Release 2: Added ConfigurationChanged event fired whenever changes in .config file detected. Improved file watching filtering.Facebook Developer Toolkit: Version 3.1 BETA: Lots of bug fixes. Issues addressed: http://facebooktoolkit.codeplex.com/WorkItem/View.aspx?WorkItemId=14808 http://facebooktoolkit.codeplex.com/W...iExporter - iTunes playlist exporting: iExporter gui v2.5.0.0 - console v1.2.1.0: Paypal donate! New features and redesign for iExporter Gui You can now select/deselect all visible items with one click in the overview When yo...Line Counter: 1.5.5: The Line Counter is a tool to calculate lines of your code files. The tool was written in .NET 2.0. 
Line Counter 1.5.5 Fixed bugs in C# counter an...Live Writer Picasa Plugin: Live Writer Picasa Plugin 1.0.0: Changelog Since this is the first version there are no changes.Media Player Field Type: Media Player Field Type v1.0: Display a media player in a column of you document library. The library can contain movie files of diferent formats. The player will appear in the ...Numina Application/Security Framework: Numina.Framework Core 49601: Added .LESS library for CSS Updated default style and logo Added a few methods and method overloads to the .NET libraryOver Store: OverStore 1.16.0.0: Version 1.16.0.0 Runtime components uses PersistingRuntimeException instead of many exception types. PersistingRuntimeException message includes...patterns & practices Web Client Developer Guidance: Web Client Software Factory 2010 beta source code: The Web Client Software Factory 2010 provides an integrated set of guidance that assists architects and developers in creating web client applicati...SCSI Interface for Multimedia and Block Devices: Release 12 - View CD-DVD Drive Features: Changes in this version: - Added the ability to view the features of a CD/DVD device (e.g.: what discs it supports, whether it supports Mount Raini...SharePoint Labs: SPLab5006A-FRA-Level100: SPLab5006A-FRA-Level100 This SharePoint Lab will teach you how to create a Feature within Visual Studio, how to brand it, how to incorporate ressou...SharePoint Labs: SPLab5007A-FRA-Level300: SPLab5007A-FRA-Level300 This SharePoint Lab will teach you how to create a reusable and distributable project model for developping Features within...SharePoint Labs: SPLab5008A-FRA-Level100: SPLab5008A-FRA-Level100 This SharePoint Lab will teach you how to add an option in the ECB menu (Edit Control Block) only for specific file types w...SharePoint Labs: SPLab5009A-FRA-Level100: SPLab5009A-FRA-Level100 This SharePoint Lab will teach you the "Site Pages" model and the differences between customized/uncustomized pages (ghoste...SharePoint Labs: SPLab5010A-FRA-Level100: SPLab5010A-FRA-Level100 This SharePoint Lab will teach you the "Application Pages" model and the differences between "Site Pages" and "Application ...SharePoint Labs: SPLab5011A-FRA-Level100: SPLab5011A-FRA-Level100 This SharePoint Lab will teach you how to create a basic Application Page in the 12\TEMPLATE\LAYOUTS. Lab Language : French...sPATCH: sPatch v0.9b: + Fixed: an issue most webservers need leading slash to return filestreamsTASKedit: sTASKedit (pre-Alpha Release): This release is only for playing around, currently not useful Supported Files:Open 1.3.6 client tasks.data Export to 1.3.6 client tasks.data E...TRX Merger Utility: TRX Merger v1.0: First versionttgLib: ttgLib-0.01-beta1: In beta-version we've implemented basic functionality of ttgLib - now it can solve various problems using CPU+GPU bundle. 
Most important things: ...WikiNETParser: Wiki .NET Parser 2.5: Wiki .NET Parser 2.5 The documentation, binaries and source code could be downloaded from http://catarsa.com portal The latest release to downloa...WPF Zen Garden: Release 1.0: This is the first release.XNA 3D World Studio Content Pipeline: XNA 3DWS Content Pipeline - R2: This version adds terrains and brush based modelsMost Popular ProjectsRawrWBFS ManagerMicrosoft SQL Server Product Samples: DatabaseASP.NET Ajax LibrarySilverlight ToolkitAJAX Control ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesDotNetNuke® Community EditionMost Active ProjectsGraffiti CMSRawrjQuery Library for SharePoint Web ServicesFacebook Developer ToolkitBlogEngine.NETN2 CMSBase Class LibrariesFarseer Physics EngineLINQ to TwitterMicrosoft Biology Foundation

    Read the article

  • ArchBeat Link-o-Rama for 11/15/2011

    - by Bob Rhubart
    Java Magazine - November/December 2011 - by and for the Java Community Java Magazine is an essential source of knowledge about Java technology, the Java programming language, and Java-based applications for people who rely on them in their professional careers, or who aspire to. Enterprise 2.0 Conference: November 14-17 | Kellsey Ruppel "Oracle is proud to be a Gold sponsor of the Enterprise 2.0 West Conference, November 14-17, 2011 in Santa Clara, CA. You will see the latest collaboration tools and technologies, and learn from thought leaders in Enterprise 2.0's comprehensive conference." The Return of Oracle Wikis: Bigger and Better | @oracletechnet The Oracle Wikis are back - this time, with Oracle SSO on top and powered by Atlassian's Confluence technology. These wikis offer quite a bit more functionality than the old platform. Cloud Migration Lifecycle | Tom Laszewski Laszewski breaks down the four steps in the Set Up Phase of the Cloud Migration lifecycle. Architecture all day. Oracle Technology Network Architect Day - Phoenix, AZ - Dec14 Spend the day with your peers learning from Oracle experts in engineered systems, cloud computing, Oracle Coherence, Oracle WebLogic, and more. Registration is free, but seating is limited. SOA all the Time; Architects in AZ; Clearing Info Integration Hurdles This week on the Architect Home Page on OTN. Live Webcast: New Innovations in Oracle Linux Date: Tuesday, November 15, 2011 Time: 9:00 AM PT / Noon ET Speakers: Chris Mason, Elena Zannoni. People in glass futures should throw stones | Nicholas Carr "Remember that Microsoft video on our glassy future? Or that one from Corning? Or that one from Toyota?" asks Carr. "What they all suggest, and assume, is that our rich natural 'interface' with the world will steadily wither away as we become more reliant on software mediation." Integration of SABSA Security Architecture Approaches with TOGAF ADM | Jeevak Kasarkod Jeevak Kasarkod's overview of a new paper from the OpenGroup and the SABSA institute "which delves into the incorporatation of risk management and security architecture approaches into a well established enterprise architecture methodology - TOGAF." Cloud Computing at the Tactical Edge | Grace Lewis - SEI Lewis describes the SEI's work with Cloudlets, " lightweight servers running one or more virtual machines (VMs), [that] allow soldiers in the field to offload resource-consumptive and battery-draining computations from their handheld devices to nearby cloudlets." Simplicity Is Good | James Morle "When designing cluster and storage networking for database platforms, keep the architecture simple and avoid the complexities of multi-tier topologies," says Morle. "Complexity is the enemy of availability." Mainframe as the cloud? Tom Laszewski There's nothing new about using the mainframe in the cloud, says Laszewski. Let Devoxx 2011 begin! | The Aquarium The Aquarium marks the kick-off of Devoxx 2011 with "a quick rundown of the Java EE and GlassFish side of things."

    Read the article

  • A Little More Iron Man 2...

    - by Fekete Zoltán
    On the Oracle website, the Iron Man 2 page has plenty of information on topics including Oracle Cloud Computing; Exadata, the Sun Oracle Database Machine; and Oracle Enterprise Manager. You can also watch the Iron Man 2 trailer on the Iron Man 2 page, and the movie teaser about the special Oracle software and hardware shows what this combination means for innovative companies.

    Read the article

  • Penguin's HPC Waddle

    Server Snapshot: Penguin Computing has always, as its name implies, focused on developing best practices for Linux-based systems, software and services, particularly in the HPC space.

    Read the article

< Previous Page | 156 157 158 159 160 161 162 163 164 165 166 167  | Next Page >