Search Results

Search found 6920 results on 277 pages for 'block'.


  • How To Get Web Site Thumbnail Image In ASP.NET

    - by SAMIR BHOGAYTA
    Overview

    One very common requirement of many web applications is to display a thumbnail image of a web site. A typical example is providing a link to a dynamic website along with its current thumbnail image, or displaying images of websites with their links as search results (I would love to see it on Google). Microsoft .NET Framework 2.0 makes this quite easy to do in an ASP.NET application.

    Background

    In order to generate an image of a web page, we first need to load the web page to get its HTML, and then that HTML needs to be rendered in a web browser. After that, a screenshot can be taken easily. I think there is no easier way to do this. Before .NET Framework 2.0 it was quite difficult to use a web browser in C# or VB.NET because we either had to use COM+ interoperability or third-party controls, which become a headache later.

    WebBrowser control in .NET Framework 2.0

    In .NET Framework 2.0 we have a new Windows Forms WebBrowser control, which is a wrapper around the old shdocvw.dll. All you really need to do is drop a WebBrowser control from your Toolbox onto your form. If you have not used the WebBrowser control yet, it's quite easy to use and very consistent with other Windows Forms controls. Some important methods of the WebBrowser control are:

        public bool GoBack();
        public bool GoForward();
        public void GoHome();
        public void GoSearch();
        public void Navigate(Uri url);
        public void DrawToBitmap(Bitmap bitmap, Rectangle targetBounds);

    These methods are self-explanatory from their names; the Navigate method, for example, redirects the browser to the provided URL and has a number of useful overloads. DrawToBitmap (inherited from Control) draws the current image of the WebBrowser to the provided bitmap.

    Using the WebBrowser control in ASP.NET 2.0 - The Solution

    Let's start implementing the solution discussed above. First we define a static method to get the web site thumbnail image.

        public static Bitmap GetWebSiteThumbnail(string Url, int BrowserWidth,
            int BrowserHeight, int ThumbnailWidth, int ThumbnailHeight)
        {
            WebsiteThumbnailImage thumbnailGenerator = new WebsiteThumbnailImage(Url,
                BrowserWidth, BrowserHeight, ThumbnailWidth, ThumbnailHeight);
            return thumbnailGenerator.GenerateWebSiteThumbnailImage();
        }

    The WebsiteThumbnailImage class has a public method named GenerateWebSiteThumbnailImage which generates the website thumbnail image in a separate STA thread and waits for the thread to exit. In this case, I decided to use the Join method of the Thread class to block the calling thread until the bitmap is actually available, and then return the generated web site thumbnail.

        public Bitmap GenerateWebSiteThumbnailImage()
        {
            Thread m_thread = new Thread(new ThreadStart(_GenerateWebSiteThumbnailImage));
            m_thread.SetApartmentState(ApartmentState.STA);
            m_thread.Start();
            m_thread.Join();
            return m_Bitmap;
        }

    The _GenerateWebSiteThumbnailImage method creates a WebBrowser control object and navigates to the provided URL. We also register for the DocumentCompleted event of the web browser control to take the screenshot of the web page. To let the control process its messages, we call Application.DoEvents() in a loop, waiting until the browser state changes to Complete.
        private void _GenerateWebSiteThumbnailImage()
        {
            WebBrowser m_WebBrowser = new WebBrowser();
            m_WebBrowser.ScrollBarsEnabled = false;
            m_WebBrowser.Navigate(m_Url);
            m_WebBrowser.DocumentCompleted +=
                new WebBrowserDocumentCompletedEventHandler(WebBrowser_DocumentCompleted);
            while (m_WebBrowser.ReadyState != WebBrowserReadyState.Complete)
                Application.DoEvents();
            m_WebBrowser.Dispose();
        }

    The DocumentCompleted event fires when the navigation is completed and the browser is ready for the screenshot. We grab the screenshot using the DrawToBitmap method described previously, which returns the bitmap of the web browser. The thumbnail image is then generated using the GetThumbnailImage method of the Bitmap class, passing it the required thumbnail width and height.

        private void WebBrowser_DocumentCompleted(object sender,
            WebBrowserDocumentCompletedEventArgs e)
        {
            WebBrowser m_WebBrowser = (WebBrowser)sender;
            m_WebBrowser.ClientSize = new Size(this.m_BrowserWidth, this.m_BrowserHeight);
            m_WebBrowser.ScrollBarsEnabled = false;
            m_Bitmap = new Bitmap(m_WebBrowser.Bounds.Width, m_WebBrowser.Bounds.Height);
            m_WebBrowser.BringToFront();
            m_WebBrowser.DrawToBitmap(m_Bitmap, m_WebBrowser.Bounds);
            m_Bitmap = (Bitmap)m_Bitmap.GetThumbnailImage(m_ThumbnailWidth,
                m_ThumbnailHeight, null, IntPtr.Zero);
        }

    One more example here: http://www.codeproject.com/KB/aspnet/Website_URL_Screenshot.aspx
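    A quick usage sketch (my own addition, not from the article: it assumes the WebsiteThumbnailImage class above is compiled into the web project, and the page name GetThumbnail.aspx is just an example):

        using System.Drawing;
        using System.Drawing.Imaging;
        using System.IO;

        protected void Page_Load(object sender, EventArgs e)
        {
            // Render the page in a 1024x768 browser, then shrink to 120x90.
            Bitmap thumbnail = WebsiteThumbnailImage.GetWebSiteThumbnail(
                "http://www.google.com", 1024, 768, 120, 90);

            Response.ContentType = "image/png";
            // PNG encoding needs a seekable stream, so buffer through a
            // MemoryStream instead of saving directly to Response.OutputStream.
            using (MemoryStream buffer = new MemoryStream())
            {
                thumbnail.Save(buffer, ImageFormat.Png);
                buffer.WriteTo(Response.OutputStream);
            }
        }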


  • How to Tell If Your Computer is Overheating and What to Do About It

    - by Chris Hoffman
    Heat is a computer's enemy. Computers are designed with heat dispersion and ventilation in mind so they don't overheat. If too much heat builds up, your computer may become unstable or suddenly shut down. The CPU and graphics card produce much more heat when running demanding applications, and if there's a problem with your computer's cooling system, an excess of heat could even physically damage its components.

    Is Your Computer Overheating?

    When using a typical computer in a typical way, you shouldn't have to worry about overheating at all. However, if you're encountering system instability issues like abrupt shutdowns, blue screens, and freezes — especially while doing something demanding like playing PC games or encoding video — your computer may be overheating. This can happen for several reasons: your computer's case may be full of dust, a fan may have failed, something may be blocking your computer's vents, or you may have a compact laptop that was never designed to run at maximum performance for hours on end.

    Monitoring Your Computer's Temperature

    First, bear in mind that different CPUs and GPUs (graphics cards) have different optimal temperature ranges. Before getting too worried about a temperature, check your computer's documentation, or its CPU or graphics card specifications, and make sure you know the temperature ranges your hardware can handle.

    You can monitor your computer's temperatures in a variety of ways. First, you may have a way to monitor temperature already built into your system. You can often view temperature values in your computer's BIOS or UEFI settings screen. This lets you quickly check your computer's temperature if Windows freezes or blue-screens on you: just boot the computer, enter the BIOS or UEFI screen, and check the temperatures displayed there. Note that not all BIOSes or UEFI screens will display this information, but it is very common.

    There are also programs that will display your computer's temperature. Such programs just read the sensors inside your computer and show you the values they report, so there is a wide variety of tools you can use for this, from the simple Speccy system information utility to an advanced tool like SpeedFan. HWMonitor also offers this feature, displaying a wide variety of sensor information. Be sure to look at your CPU and graphics card temperatures. You can also find other temperatures, such as that of your hard drive, but those components will generally only overheat if it becomes extremely hot inside the computer's case; they shouldn't generate too much heat on their own.

    If you think your computer may be overheating, don't just glance at these sensors once and ignore them. Do something demanding with your computer, such as running a CPU burn-in test with Prime95, playing a PC game, or running a graphical benchmark. Monitor the computer's temperature while you do this, even checking a few hours later — does any component overheat after you push it hard for a while?

    Preventing Your Computer From Overheating

    If your computer is overheating, here are some things you can do about it:

    Dust Out Your Computer's Case: Dust accumulates in desktop PC cases and even laptops over time, clogging fans and blocking air flow. This dust can cause ventilation problems, trapping heat and preventing your PC from cooling itself properly. Be sure to clean your computer's case occasionally to prevent dust build-up. Unfortunately, it's often more difficult to dust out overheating laptops.
    Ensure Proper Ventilation: Put the computer in a location where it can properly ventilate itself. If it's a desktop, don't push the case up against a wall, where its vents can become blocked, and don't leave it near a radiator or heating vent. If it's a laptop, be careful not to block its air vents, particularly when doing something demanding. For example, putting a laptop down on a mattress, allowing it to sink in, and leaving it there can lead to overheating — especially if the laptop is doing something demanding and generating heat it can't get rid of.

    Check if Fans Are Running: If you're not sure why your computer started overheating, open its case and check that all the fans are running. It's possible that a CPU, graphics card, or case fan failed or became unplugged, reducing air flow.

    Tune Up Heat Sinks: If your CPU is overheating, its heat sink may not be seated correctly or its thermal paste may be old. You may need to remove the heat sink, apply new thermal paste, and reseat the heat sink properly. This tip applies more to tweakers, overclockers, and people who build their own PCs, especially if they may have made a mistake when originally applying the thermal paste.

    All of this is often much more difficult with laptops, which generally aren't designed to be user-serviceable. That can lead to trouble if a laptop becomes filled with dust and needs to be cleaned out, especially if it was never designed to be opened by users at all. Consult our guide to diagnosing and fixing an overheating laptop for help with cooling down a hot laptop.

    Overheating is a definite danger when overclocking your CPU or graphics card. Overclocking will cause your components to run hotter, and the additional heat will cause problems unless you can properly cool your components. If you've overclocked your hardware and it has started to overheat — well, throttle back the overclock!

    Image Credit: Vinni Malek on Flickr


  • PASS: 2013 Summit Location

    - by Bill Graziano
    HQ recently posted a brief update on our search for a location for 2013. It includes links to posts by four Board members and two community members. I'd like to add my thoughts to the mix and ask you a question. But I can't give you a real understanding without telling you some history first.

    So far we've had the Summit in Chicago, San Francisco, Orlando, Dallas, Denver and Seattle. Each has a little different feel and distinct memories. I enjoyed getting drinks by the pool in Orlando after the sessions ended. I didn't like that our location in Dallas was so far away from all the nightlife. Denver was downtown, but we had real challenges with hotels. I enjoyed the different locations, and I always enjoyed the announcement during the third keynote of the location of the next Summit.

    There are two big events that impacted my thinking on the Summit location. The first was our transition to the new management company in early 2007. The event that September in Denver was put on with a six-month planning cycle by a brand new headquarters staff. It wasn't perfect, but it came off much better than I had dared to hope. It also moved us out of the cookie-cutter conferences that we used to do into a model where we have a lot more control. I think you'll all agree that the production values of our last few Summits have been fantastic.

    That Summit also led to our changing relationship with Microsoft. Microsoft holds two seats on the PASS Board. All the PASS Board members face the same challenge: we all have full-time jobs and PASS comes in second place professionally (or sometimes further back). Starting in 2008 we were assigned a liaison from Microsoft who had a much larger block of time to coordinate with us. That changed everything between PASS and Microsoft. Suddenly we were talking to product marketing, Microsoft PR, their event team, the Tech*Ed team, the education division, their user group team and their field sales team – locally and internationally. We strengthened our relationship with CSS, SQLCAT and the engineering teams. We had exposure at the executive level that we'd never had before. And their level of participation at the Summit changed from under 100 people to 400-500 people. I think those 400+ Microsoft employees have value at a conference on Microsoft SQL Server. For the first time, Seattle had a real competitive advantage over other cities.

    I'm one who looked very hard at staying in Seattle for a long, long time. I think those Microsoft engineers have value to our attendees. I think the increased support that Microsoft can provide when we're in Seattle has value to our attendees. But that doesn't tell the whole story. There's a significant (and vocal!) percentage of our membership that wants the Summit outside Seattle. Post-2007 PASS doesn't know what it's like to have a Summit outside of Seattle. I think until we have a Summit in another city we won't really know the trade-offs.

    I think a model where we move every third or every other year is interesting. But until we have another Summit outside Seattle, and can evaluate the logistics and how important it is to have depth and variety in our Microsoft participation, we won't really know.

    Another benefit that comes with a move is variety, or diversity. I learn more when I'm exposed to new things and new people. I believe that moving the Summit will give a different set of people an opportunity to attend.
    Grant Fritchey writes, "It seems that the board is leaning, extremely heavily, towards making it a permanent fixture in Seattle." I don't believe that's true. I know there was discussion of that earlier, but I don't believe it's true now.

    And that brings me to my question. Do we announce the city now, or do we wait until the 2012 Summit? I'm happy to announce Seattle vs. not-Seattle as soon as we sign the contract, but I'd like to leave the actual city announcement until the 2011 Summit. I like the drama and mystery of it. I also like that it doesn't give you a reason to skip a Summit and wait for the next one if it's closer or back in Seattle. The other side of the coin is that your planning is easier if you know where it is. What do you think?


  • Organization & Architecture UNISA Studies – Chap 5

    - by MarkPearl
    Learning Outcomes

    - Describe the operation of a memory cell
    - Explain the difference between DRAM and SRAM
    - Discuss the different types of ROM
    - Explain the concepts of a hard failure and a soft error respectively
    - Describe SDRAM organization

    Semiconductor Main Memory

    RAM is divided into two technologies, dynamic and static. The two traditional forms of RAM used in computers are DRAM and SRAM.

    DRAM (Dynamic RAM)

    Dynamic RAM is made with cells that store data as charge on capacitors. The presence or absence of charge in a capacitor is interpreted as a binary 1 or 0. Because capacitors have a natural tendency to discharge, dynamic RAM requires periodic charge refreshing to maintain data storage. The term dynamic refers to this tendency of the stored charge to leak away, even with power continuously applied. Although the DRAM cell is used to store a single bit (0 or 1), it is essentially an analogue device: the capacitor can store any charge value within a range, and a threshold value determines whether the charge is interpreted as a 1 or a 0.

    SRAM (Static RAM)

    SRAM is a digital device that uses the same logic elements used in the processor. In SRAM, binary values are stored using traditional flip-flop logic configurations. SRAM will hold its data as long as power is supplied to it; unlike DRAM, no refresh is required to retain data.

    SRAM vs. DRAM

    The DRAM cell is simpler and smaller than the SRAM cell, so DRAM is denser and less expensive than SRAM. The cost of the refreshing circuitry for DRAM needs to be considered, but if the machine requires a large amount of memory, DRAM turns out to be cheaper than SRAM. SRAMs are somewhat faster than DRAM, so SRAM is generally used for cache memory and DRAM for main memory.

    Types of ROM

    Read Only Memory (ROM) contains a permanent pattern of data that cannot be changed. ROM is non-volatile, meaning no power source is required to maintain the bit values in memory. While it is possible to read a ROM, it is not possible to write new data into it. An important application of ROM is microprogramming; other applications include library subroutines for frequently wanted functions, system programs, and function tables. A ROM is created like any other integrated circuit chip, with the data actually wired into the chip as part of the fabrication process. To reduce fabrication costs, we have PROMs. PROMs are:

    - Written only once
    - Non-volatile
    - Written after fabrication

    Another variation of ROM is the read-mostly memory, which is useful for applications in which read operations are far more frequent than write operations, but for which non-volatile storage is required. There are three common forms of read-mostly memory, namely:

    - EPROM
    - EEPROM
    - Flash memory

    Error Correction

    Semiconductor memory is subject to errors, which can be classed into two categories:

    - Hard failure – a permanent physical defect such that the memory cell or cells cannot reliably store data
    - Soft error – a random event that alters the contents of one or more memory cells without damaging the memory (common causes include power supply issues, etc.)

    Most modern main memory systems include logic for both detecting and correcting errors.
    Error detection works as follows:

    - When data is to be read into memory, a calculation is performed on the data to produce a code
    - Both the code and the data are stored
    - When the previously stored word is read out, the code is used to detect and possibly correct errors

    The error checking provides one of three possible results:

    - No errors are detected – the fetched data bits are sent out
    - An error is detected, and it is possible to correct it – the data bits plus error correction bits are fed into a corrector, which produces a corrected set of bits to be sent out
    - An error is detected, but it is not possible to correct it – this condition is reported

    Hamming Code

    See the wiki for a detailed explanation. We will probably need to know how to do a Hamming code – refer to the textbook (pg. 188–189).

    Advanced DRAM Organization

    One of the most critical system bottlenecks when using high-performance processors is the interface to main memory; this interface is the most important pathway in the entire computer system. The basic building block of main memory remains the DRAM chip. In recent years a number of enhancements to the basic DRAM architecture have been explored, and some of these are now on the market, including:

    - SDRAM (Synchronous DRAM)
    - DDR-SDRAM
    - RDRAM

    SDRAM (Synchronous DRAM)

    - SDRAM exchanges data with the processor synchronized to an external clock signal, running at the full speed of the processor/memory bus without imposing wait states
    - SDRAM employs a burst mode to eliminate the address setup time and row and column line precharge time after the first access
    - In burst mode a series of data bits can be clocked out rapidly after the first bit has been accessed
    - SDRAM has a multiple-bank internal architecture that improves opportunities for on-chip parallelism
    - SDRAM performs best when it is transferring large blocks of data serially

    There is now an enhanced version of SDRAM known as double data rate SDRAM, or DDR-SDRAM, that overcomes the once-per-cycle limitation of SDRAM.
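    As a concrete illustration of the idea (my own sketch, not from the textbook): a Hamming(7,4) code stores 3 parity bits alongside 4 data bits, and the recomputed parity, called the syndrome, directly names the position of a single flipped bit.

        // Hamming(7,4) sketch: encode 4 data bits into a 7-bit codeword
        // (positions 1..7 = p1 p2 d1 p3 d2 d3 d4) and locate a single-bit error.
        static int[] Encode(int[] d)  // d = { d1, d2, d3, d4 }, each 0 or 1
        {
            int p1 = d[0] ^ d[1] ^ d[3];  // covers positions 1,3,5,7
            int p2 = d[0] ^ d[2] ^ d[3];  // covers positions 2,3,6,7
            int p3 = d[1] ^ d[2] ^ d[3];  // covers positions 4,5,6,7
            return new[] { p1, p2, d[0], p3, d[1], d[2], d[3] };
        }

        static int Syndrome(int[] c)  // 0 = no error, else the 1-based position in error
        {
            int s1 = c[0] ^ c[2] ^ c[4] ^ c[6];
            int s2 = c[1] ^ c[2] ^ c[5] ^ c[6];
            int s3 = c[3] ^ c[4] ^ c[5] ^ c[6];
            return s1 | (s2 << 1) | (s3 << 2);
        }

        // Example: int[] word = Encode(new[] { 1, 0, 1, 1 });
        //          word[4] ^= 1;              // corrupt position 5
        //          int pos = Syndrome(word);  // pos == 5; fix with word[pos - 1] ^= 1;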


  • HP Pavilion tx2000 - Wifi adapter no longer works after moving from 12.04 to a 12.10 clean install

    - by Marek L.
    I have an HP Pavilion tx2000 that I had been running Ubuntu 12.04 on for a couple of months without any problems (wifi worked great) until yesterday, when my hard drive failed. I replaced the hard drive and decided to install Ubuntu 12.10. Unlike 12.04, the wifi did not work after the installation finished and all the updates were installed (over Ethernet). The network drop-down in the top right didn't even show a wireless option. I Googled about for a bit and found some solutions that seemed like they might work. Unfortunately they did not. Here is what I tried:

        sudo apt-get remove bcmwl-kernel-source
        sudo apt-get install b43-fwcutter
        sudo apt-get install firmware-b43-lpphy-installer

    Restart the computer. And the wifi still didn't work. At this point I panicked a bit and tried to undo the previous commands by running:

        sudo apt-get remove b43-fwcutter firmware-b43-lpphy-installer
        sudo apt-get install bcmwl-kernel-source

    Restart the computer. The wifi still doesn't work. This is where I stopped, because I have no idea what I am doing and don't want to mess something up. The network drop-down still doesn't show a wireless option, and the hardware wifi switch on the laptop is amber (it turns blue when the wifi is on). Using the hardware switch does not change the color.

    Output from sudo lspci:

        ...
        08:00.0 Network controller: Broadcom Corporation BCM4322 802.11a/b/g/n Wireless LAN Controller (rev 01)
        ...

    Output from sudo lshw -class network:

        *-network UNCLAIMED
             description: Network controller
             product: BCM4322 802.11a/b/g/n Wireless LAN Controller
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:08:00.0
             version: 01
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list
             configuration: latency=0
             resources: memory:d1100000-d1103fff
        ...

    Output from sudo rfkill list all:

        0: hp-wifi: Wireless LAN
            Soft blocked: no
            Hard blocked: yes

    UPDATE: After writing up this question I tried the following command:

        sudo rfkill unblock all

    At first it didn't do anything, but after running it about four times, sudo rfkill list all now returns:

        0: hp-wifi: Wireless LAN
            Soft blocked: no
            Hard blocked: no

    But the network menu still does not have a wireless option and the hardware switch still glows amber. Pushing the hardware switch turns the hard block back on, and I have to run sudo rfkill unblock all multiple times again to turn it off. Any help is appreciated!
    Update 2: Full output from sudo lspci -nn:

        00:00.0 Host bridge [0600]: Advanced Micro Devices [AMD] RS780 Host Bridge [1022:9600]
        00:01.0 PCI bridge [0604]: Advanced Micro Devices [AMD] RS780/RS880 PCI to PCI bridge (int gfx) [1022:9602]
        00:04.0 PCI bridge [0604]: Advanced Micro Devices [AMD] RS780/RS880 PCI to PCI bridge (PCIE port 0) [1022:9604]
        00:05.0 PCI bridge [0604]: Advanced Micro Devices [AMD] RS780/RS880 PCI to PCI bridge (PCIE port 1) [1022:9605]
        00:06.0 PCI bridge [0604]: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2) [1022:9606]
        00:11.0 SATA controller [0106]: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode] [1002:4391]
        00:12.0 USB controller [0c03]: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller [1002:4397]
        00:12.1 USB controller [0c03]: Advanced Micro Devices [AMD] nee ATI SB7x0 USB OHCI1 Controller [1002:4398]
        00:12.2 USB controller [0c03]: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller [1002:4396]
        00:13.0 USB controller [0c03]: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller [1002:4397]
        00:13.1 USB controller [0c03]: Advanced Micro Devices [AMD] nee ATI SB7x0 USB OHCI1 Controller [1002:4398]
        00:13.2 USB controller [0c03]: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller [1002:4396]
        00:14.0 SMBus [0c05]: Advanced Micro Devices [AMD] nee ATI SBx00 SMBus Controller [1002:4385] (rev 3a)
        00:14.1 IDE interface [0101]: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 IDE Controller [1002:439c]
        00:14.2 Audio device [0403]: Advanced Micro Devices [AMD] nee ATI SBx00 Azalia (Intel HDA) [1002:4383]
        00:14.3 ISA bridge [0601]: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 LPC host controller [1002:439d]
        00:14.4 PCI bridge [0604]: Advanced Micro Devices [AMD] nee ATI SBx00 PCI to PCI Bridge [1002:4384]
        00:14.5 USB controller [0c03]: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI2 Controller [1002:4399]
        00:18.0 Host bridge [0600]: Advanced Micro Devices [AMD] Family 11h Processor HyperTransport Configuration [1022:1300] (rev 40)
        00:18.1 Host bridge [0600]: Advanced Micro Devices [AMD] Family 11h Processor Address Map [1022:1301]
        00:18.2 Host bridge [0600]: Advanced Micro Devices [AMD] Family 11h Processor DRAM Controller [1022:1302]
        00:18.3 Host bridge [0600]: Advanced Micro Devices [AMD] Family 11h Processor Miscellaneous Control [1022:1303]
        00:18.4 Host bridge [0600]: Advanced Micro Devices [AMD] Family 11h Processor Link Control [1022:1304]
        01:05.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI RS780M/RS780MN [Mobility Radeon HD 3200 Graphics] [1002:9612]
        08:00.0 Network controller [0280]: Broadcom Corporation BCM4322 802.11a/b/g/n Wireless LAN Controller [14e4:432b] (rev 01)
        09:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168] (rev 02)


  • XNA: Huge Tile Map, long load times

    - by Zach
    Recently I built a tile map generator for a game project. What I am very proud of is that I finally got it to the point where I can have a GIANT 2D map built perfectly on my PC, about 120000 pixels by 40000 pixels. I can actually go larger, but I have two drawbacks. #1: RAM — the map currently uses about 320MB of RAM, and I believe the Xbox allows 512MB. #2: it takes 20 minutes for the map to build and then display on the Xbox, while on my PC it takes less than a few seconds. I need to bring that 20 minutes of generating down to however little I can, and I need to lower the amount of RAM usage while still being able to generate my map.

    Right now everything is stored in jagged arrays, each piece generating at a size of 1280x720 (the mother piece), up to the amount that I need. Every block is exactly 40x40 pixels; however, blocks get removed from a List or regenerated into a List depending on how close the mother piece is to the player, saving A LOT of CPU. So at all times it loops through no more than some 5184 blocks. Well, at least I'm sure of this. But how can I lower my RAM usage without hurting the size of the map, and how can I lower these INSANE loading times?

    EDIT: Let me explain myself better. Also, I'd like to let everyone know now that I'm inexperienced with many of these things. So here is an example of the arrays I'm using, in a shorter form:

        int[][] array = new int[30][];
        array[0] = new int[] { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 };
        array[1] = new int[] { 1, 3, 3, 3, 3, 1, 0, 0, 0, 0, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 };

    That goes on for around 30 arrays downward. Every time it hits a 1, it generates a 1280x720 tile map piece, exactly the way it does above. This is how I loop through those arrays:

        for (int i = 0; i < array.Length; i += 1)
        {
            for (int h = 0; h < array[i].Length; h += 1)
            {
                // generate the piece for array[i][h] here
            }
        }

    Now, how the tiles are drawn and removed is something like this:

        public void Draw(SpriteBatch spriteBatch, Vector2 cam)
        {
            if (cam.X >= this.Position.X - 1280)
            {
                if (cam.X <= this.Position.X + 2560)
                {
                    if (cam.Y >= this.Position.Y - 720)
                    {
                        if (cam.Y <= this.Position.Y + 1440)
                        {
                            if (visible)
                            {
                                if (once == 0)
                                {
                                    once = 1;
                                    visible = false;
                                    regen();
                                }
                            }
                            for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles[i].Draw(spriteBatch, cam); }
                            for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles[i].Draw(spriteBatch, cam); }
                        }
                        else
                        {
                            once = 0;
                            for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); }
                            for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); }
                        }
                    }
                    else
                    {
                        once = 0;
                        for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); }
                        for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); }
                    }
                }
                else
                {
                    once = 0;
                    for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); }
                    for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); }
                }
            }
            else
            {
                once = 0;
                for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); }
                for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); }
            }
        }

    If you guys still need more information, just ask in the comments.
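    An aside on the Draw method quoted above (my own sketch, using the field names from the question, not the asker's actual code): the four identical else branches are one bounds test in disguise, so the same logic can be written as a single rectangle check with one shared cleanup path.

        public void Draw(SpriteBatch spriteBatch, Vector2 cam)
        {
            // The original if-ladder keeps this piece alive while the camera is
            // within 1280px horizontally / 720px vertically of its 1280x720 area.
            Rectangle activeZone = new Rectangle(
                (int)Position.X - 1280, (int)Position.Y - 720, 3 * 1280, 3 * 720);

            if (activeZone.Contains((int)cam.X, (int)cam.Y))
            {
                if (visible && once == 0)
                {
                    once = 1;
                    visible = false;
                    regen();  // rebuild this piece's tile lists
                }
                for (int i = Tiles.Count - 1; i >= 0; i--)
                    Tiles[i].Draw(spriteBatch, cam);
                for (int i = unWalkTiles.Count - 1; i >= 0; i--)
                    unWalkTiles[i].Draw(spriteBatch, cam);
            }
            else
            {
                // Single cleanup path instead of four copies of the same loops.
                once = 0;
                Tiles.Clear();
                unWalkTiles.Clear();
            }
        }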


  • Installing Ubuntu 12.04.1 x64 with Fake RAID 1 [SOLVED]

    - by Arkadius
    I had:

    Software: dual boot with Windows XP and Ubuntu 10.04 LTS x32

    Hardware: fake RAID 1 (mirroring) with 2x1 TB:

        Partition 1 - Windows
        Partition 2 - SWAP
        Partition 3 - / (root)
        Partition 4 - Extended
        Partition 5 - /home
        Partition 6 - /data

    arek@domek:/var/log/installer$ sudo fdisk -l

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000de1b9

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *          63   524297339   262148638+   7  HPFS/NTFS/exFAT
        /dev/sda2       524297340   528506369     2104515   82  Linux swap / Solaris
        /dev/sda3       528506370   570468149    20980890   83  Linux
        /dev/sda4       570468150  1953118439   691325145    5  Extended
        /dev/sda5       570468213   675340469    52436128+  83  Linux
        /dev/sda6       675340533  1953118439   638888953+  83  Linux

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000de1b9

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          63   524297339   262148638+   7  HPFS/NTFS/exFAT
        /dev/sdb2       524297340   528506369     2104515   82  Linux swap / Solaris
        /dev/sdb3       528506370   570468149    20980890   83  Linux
        /dev/sdb4       570468150  1953118439   691325145    5  Extended
        /dev/sdb5       570468213   675340469    52436128+  83  Linux
        /dev/sdb6       675340533  1953118439   638888953+  83  Linux

    arek@domek:/var/log/installer$ ls -l /dev/mapper/

        total 0
        crw------- 1 root root 10, 236 Oct 7 20:17 control
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha -> ../dm-0
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha1 -> ../dm-1
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha2 -> ../dm-2
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha3 -> ../dm-3
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha4 -> ../dm-4
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha5 -> ../dm-5
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha6 -> ../dm-6

    I wanted to upgrade from 10.04 x32 to 12.04 x64 using a FRESH installation, so I ran the installation of Ubuntu 12.04.1 x64 LTS using the alternate CD. During the installation I selected manual partitioning and chose to:

    - Use and format / (root)
    - Use and format SWAP
    - Use and keep data on /home
    - Use and keep data on /data

    After I clicked "Continue" I got an error creating and formatting the SWAP partition. I went to a terminal with Alt+F2 (?) and hit Enter, and discovered that the RAID was visible as a single disk with NO partitions — something like this:

        arek@domek:/var/log/installer$ ls -l /dev/mapper/
        lrwxrwxrwx 1 root root 7 Oct 7 20:17 /dev/mapper/pdc_jhjbcaha -> ../dm-0
        arek@domek:/var/log/installer$ ls -l /dev/dm*
        brw-rw---- 1 root disk 252, 0 Oct 7 20:17 /dev/dm-0

    So I switched to the log console with Alt+F3 (?)
    and saw errors like the ones below:

        Oct 7 14:02:45 check-missing-firmware: /dev/.udev/firmware-missing does not exist, skipping
        Oct 7 14:02:45 check-missing-firmware: /run/udev/firmware-missing does not exist, skipping
        Oct 7 14:02:45 check-missing-firmware: no missing firmware in /dev/.udev/firmware-missing /run/udev/firmware-missing
        Oct 7 14:02:45 anna-install: Installing dmraid-udeb
        Oct 7 14:02:45 anna[12599]: DEBUG: retrieving dmraid-udeb 1.0.0.rc16-4.1ubuntu8
        Oct 7 14:02:49 anna[12599]: DEBUG: retrieving libdmraid1.0.0.rc16-udeb 1.0.0.rc16-4.1ubuntu8
        Oct 7 14:02:49 anna[12599]: DEBUG: retrieving kpartx-udeb 0.4.9-3ubuntu5
        Oct 7 14:02:49 disk-detect: Serial ATA RAID disk(s) detected.
        Oct 7 14:02:55 disk-detect: Enabling dmraid support.
        Oct 7 14:02:55 disk-detect: RAID set "pdc_jhjbcaha" was activated
        Oct 7 14:02:55 HERE --> dmraid-activate: ERROR: Cannot retrieve RAID set information for pdc_jhjbcaha
        Oct 7 14:02:56 check-missing-firmware: /dev/.udev/firmware-missing does not exist, skipping
        Oct 7 14:02:56 check-missing-firmware: /run/udev/firmware-missing does not exist, skipping
        Oct 7 14:02:56 check-missing-firmware: no missing firmware in /dev/.udev/firmware-missing /run/udev/firmware-missing
        Oct 7 14:02:57 main-menu[428]: DEBUG: resolver (libnewt0.52): package doesn't exist (ignored)
        Oct 7 14:02:57 main-menu[428]: DEBUG: resolver (ext2-modules): package doesn't exist (ignored)
        Oct 7 14:02:57 main-menu[428]: INFO: Menu item 'partman-base' selected
        Oct 7 14:02:57 kernel: [ 316.512999] NTFS driver 2.1.30 [Flags: R/O MODULE].
        Oct 7 14:02:57 kernel: [ 316.523221] Btrfs loaded
        Oct 7 14:02:57 kernel: [ 316.534781] JFS: nTxBlock = 8192, nTxLock = 65536
        Oct 7 14:02:57 kernel: [ 316.554749] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
        Oct 7 14:02:57 kernel: [ 316.555336] SGI XFS Quota Management subsystem
        Oct 7 14:02:58 md-devices: mdadm: No arrays found in config file or automatically
        Oct 7 14:02:58 partman: No matching physical volumes found
        Oct 7 14:02:58 partman: No volume groups found
        Oct 7 14:02:58 partman: Reading all physical volumes. This may take a while...
        Oct 7 14:02:58 partman-lvm: No volume groups found
        Oct 7 14:02:58 partman: Error running 'tune2fs -l /dev/mapper/pdc_jhjbcaha'
        Oct 7 14:02:58 partman: Error running 'tune2fs -l /dev/mapper/pdc_jhjbcaha'
        Oct 7 14:02:58 partman: Error running 'tune2fs -l /dev/mapper/pdc_jhjbcaha'
        Oct 7 14:06:11 HERE --> partman: mkswap: can't open '/dev/mapper/pdc_jhjbcaha2': No such file or directory
        Oct 7 14:07:28 init: starting pid 401, tty '/dev/tty2': '-/bin/sh'
        Oct 7 14:15:00 net/hw-detect.hotplug: Detected hotpluggable network interface eth0
        Oct 7 14:15:00 net/hw-detect.hotplug: Detected hotpluggable network interface lo

    As you can see, there are two errors:

        Oct 7 14:02:55 dmraid-activate: ERROR: Cannot retrieve RAID set information for pdc_jhjbcaha

    and

        Oct 7 14:06:11 partman: mkswap: can't open '/dev/mapper/pdc_jhjbcaha2': No such file or directory

    I looked on the internet and tried running the command "dmraid -ay", getting something like this:

        dmraid -ay
        /dev/mapper/pdc_jhjbcaha -> Already activated
        /dev/mapper/pdc_jhjbcaha1 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha2 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha3 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha4 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha5 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha6 -> Successfully activated

    Then I returned to the installer with Alt+F1 (?) and clicked "Return" to go back to the partitioning menu.
    I did NOT change anything, just selected "Continue" again, and everything went smoothly. I hope this will help someone. arkadius


  • Silverlight 4 Twitter Client – Part 7

    - by Max
    Download this article as a PDF

    Welcome back :) This week we are going to look at something more exciting and a much-required feature for any twitter client – auto refresh, so as to show new status updates. We are going to achieve this using Silverlight 4 timers, refreshing our datagrid every minute to show new updates. We do this so that we make only minimal requests to the twitter api, so that twitter does not block us – there is a limit of 150 requests an hour. We will also get the profile user id hyperlinked, so that whenever the user clicks on it, we take them to their twitter page.

    Also, it was a pain to always run this application by pressing F5; it would open in a browser, and you would have to right-click, uninstall and install it again to see any changes. All this and yet we were not able to debug it :( Now there is a solution to run a Silverlight application directly out of browser and still have the debug feature. Super cool; here is how: right-click on the Silverlight project, go to Debug, select the Out-Of-Browser application option and choose the *.Web project. Then just right-click on the SL project and Set as Startup Project. There you go — now every time you press F5 it will automatically run out of browser and still have the debug options. I got to know about this after some binging. Now let us jump to the core straight away.

    1) To get the user id hyperlinked, we need a DataGridTemplateColumn and, within that, a HyperlinkButton. The code for this will be:

        <data:DataGridTemplateColumn>
            <data:DataGridTemplateColumn.CellTemplate>
                <DataTemplate>
                    <HyperlinkButton Click="HyperlinkButton_Click" Content="{Binding UserName}" TargetName="_blank"></HyperlinkButton>
                </DataTemplate>
            </data:DataGridTemplateColumn.CellTemplate>
        </data:DataGridTemplateColumn>

    2) Now let us look at how this is done in the HyperlinkButton_Click event handler. There we dynamically set the NavigateUri to the twitter page. I tried to do this using some binding, eval-like stuff as in ASP.NET, but no luck!

        private void HyperlinkButton_Click(object sender, RoutedEventArgs e)
        {
            HyperlinkButton hb = (HyperlinkButton)e.OriginalSource;
            hb.NavigateUri = new Uri("http://twitter.com/" + hb.Content.ToString(), UriKind.Absolute);
        }

    3) Now we need to switch on our timer in the OnNavigatedTo event of our SL page. So we modify the OnNavigatedTo event to something like below:

        protected override void OnNavigatedTo(NavigationEventArgs e)
        {
            image1.Source = new BitmapImage(new Uri(GlobalVariable.profileImage, UriKind.Absolute));
            this.Title = GlobalVariable.getUserName() + " - Home";
            if (!GlobalVariable.isLoggedin())
                this.NavigationService.Navigate(new Uri("/Login", UriKind.Relative));
            else
            {
                currentGrid = "Timeline-Grid";
                TwitterCredentialsSubmit();
                myDispatcherTimer.Interval = new TimeSpan(0, 0, 0, 60, 0);
                myDispatcherTimer.Tick += new EventHandler(Each_Tick);
                myDispatcherTimer.Start();
            }
        }

    I use a global string – here the currentGrid variable – to track what is bound in the datagrid, so that after every timer tick I can rebind the latest data to it. That is, I only rebind the friends timeline if the data grid currently holds it, and I only rebind the respective list statuses if a list's statuses are bound to the data grid. In the above timer code, the timer is set to trigger the Each_Tick event handler every 1 minute (60 seconds).
    TimeSpan takes in (days, hours, minutes, seconds, milliseconds).

    4) Now we need to set the list name in the currentGrid variable when a list button is clicked, so add the line below to the list button event handler:

        currentGrid = currentList = b.Content.ToString();

    5) Now let us see how the Each_Tick event handler is implemented:

        public void Each_Tick(object o, EventArgs sender)
        {
            if (!currentGrid.Equals("Timeline-Grid"))
                getListStatuses(currentGrid);
            else
            {
                WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp);
                WebClient myService = new WebClient();
                myService.AllowReadStreamBuffering = true;
                myService.UseDefaultCredentials = false;
                myService.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword());
                myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted);
                myService.DownloadStringAsync(new Uri("https://twitter.com/statuses/friends_timeline.xml"));
            }
        }

    If the data grid holds the friends timeline, I just use the same bit of code we already had to bind the friends timeline to the data grid. Copy, paste. But if some list timeline is bound in the datagrid, I call the getListStatuses method with the currentGrid string, which will actually be holding the list name.

    6) I wanted to make the hyperlinks inside the status message clickable, so that when the user clicks on one, we can open that link. I tried using a converter with a regex to recognize a URL and wrap it with an href, but that is not gonna work in a Silverlight TextBlock :( Anyway, that converter code is in the zip file.

    7) You can get the complete project files from here.

    8) Please comment below with your doubts, suggestions and improvements. I will try to reply as early as possible. Thanks for all your support.

    Technorati Tags: Silverlight 4, Datagrid, Twitter API, Silverlight Timer
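    A hedged aside on the interval (my own snippet, assuming myDispatcherTimer is a System.Windows.Threading.DispatcherTimer as implied by step 3): the TimeSpan factory methods express the same interval more readably and make it easy to slow the polling down to stay well inside the 150 requests/hour limit.

        // Same as new TimeSpan(0, 0, 0, 60, 0): fire once a minute.
        myDispatcherTimer.Interval = TimeSpan.FromSeconds(60);

        // Or poll every 2 minutes, using at most 30 of the 150 hourly requests.
        myDispatcherTimer.Interval = TimeSpan.FromMinutes(2);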


  • A Visual Studio Release Grows in Brooklyn

    - by andrewbrust
    Yesterday, Microsoft held its flagship launch event for Office 2010 in Manhattan. Today, the Redmond software company is holding a local launch event for Visual Studio (VS) 2010, in Brooklyn. How come information workers get the 212 treatment and developers are relegated to 718?

    Well, here's the thing: the Brooklyn Marriott is actually a great place for an event, but you need some intimate knowledge of New York City to know that. NBC's Studio 8H, where the Office launch was held yesterday (and from where SNL is broadcast), is a pretty small venue, but you'd need some inside knowledge to recognize that. Likewise, while Office 2010 is a product whose value is apparent, appreciating VS 2010's value takes a bit more savvy. Setting aside its year-based designation, this release of VS, counting the old Visual Basic releases, is the 10th version of the product. How can a developer audience get excited about an integrated development environment when it reaches double-digit version numbers? Well, it can be tough. Luckily, Microsoft sent Jay Schmelzer, a Group Program Manager from the Visual Studio team in Redmond, to come tell the Brooklyn audience why they should be excited.

    Turns out there are a lot of reasons. Support for SharePoint development is a big one. In previous versions of VS, that support has been anemic, at best. The shortage of SharePoint developers is a huge issue in the industry, and this should help. There's also built-in support for Windows Azure (Microsoft's cloud platform) and, through a download, support for the forthcoming Windows Phone 7 platform. ASP.NET MVC, a "close-to-the-metal" Web development option that does away with the Web Forms abstraction layer, has a first-class presence in VS. So too does jQuery, the open source library that makes JavaScript development a breeze. The jQuery support is so good that Microsoft now contributes to that open source project and offers IntelliSense support for it in the code editor.

    Speaking of the VS code editor, it now supports multi-monitor setups, zoom-in, and block selection. If you're not a developer, this may sound confusing and minute. I'll just say that for people who are developers these are little things that really contribute to productivity, and that translates into lower development costs.

    The really cool demo, though, was around Visual Studio 2010's new debugging features. This stuff is hard to showcase, but I believe it's truly breakthrough technology: imagine being able to step backwards in time to see what might have caused a bug. Cool? Now imagine being able to do that even if you weren't the tester and weren't present while the testing was being done. Then imagine being able to see a video screen capture of what the tester was doing with your app when the bug occurred. VS 2010 allows all that. This could be the demise of the IWOMM ("it works on my machine") syndrome.

    After the keynote, I asked Schmelzer if any of Microsoft's competitors have debugging tools that come close to VS 2010's. His answer was an earnest "we don't think so." If that's true, that's a big deal, and a huge advantage for developer teams who adopt it. It will make software development much cheaper and more efficient. Kind of like holding a launch event at the Brooklyn Marriott instead of 30 Rock in Manhattan!

    VS 2010 (version 10) and Office 2010 (version 14) aren't the only new product versions Microsoft is releasing right now.
    There's also SQL Server 2008 R2 (version 10.5), Exchange 2010 (version 8, I believe), SharePoint 2010 (version 4) and, of course, Windows 7. With so many new versions at such levels of maturity, I think it's fair to say Microsoft has reached middle age. How does a company stave off a potential mid-life crisis, especially with young Turks like Google coming along and competing so fiercely? Hard to say. But if focusing on core value, including value that's hard to play into a sexy demo, is part of the answer, then Microsoft's doing OK. And if some new tricks, like Windows Phone 7, can gain some traction, that might round things out nicely.

    Are the legacy products old tricks, or are they revised classics? I honestly don't know, because it's the market's prerogative to pass that judgement. I can say this though: based on today's show, I think Microsoft's been doing its homework.


  • Composing Silverlight Applications With MEF

    - by PeterTweed
    Anyone who has written an application complex enough to warrant multiple controls on multiple pages/forms should understand the benefit of composite application development — that is, defining your application architecture so that it can be separated into distinct pieces, each with its own purpose, which can then be "composed" together into the solution.

    Composition can be useful in any layer of the application, from the presentation layer to the business layer, common services, or data access. Historically, people have had different options for composing applications from distinct, well-known pieces: their own version of dependency injection, containers that aid composition like Unity, the Composite Application Guidance for WPF and Silverlight, and before that the Composite UI Application Block.

    Microsoft has been working on another mechanism to aid composition and extensibility of applications for some time now – the Managed Extensibility Framework, or MEF for short. With Silverlight 4 it is part of the Silverlight environment. MEF offers a much simpler mechanism for composition and extensibility than the earlier mechanisms/frameworks; complexity has always been the primary barrier to their adoption.

    This post will guide you through a simple use of MEF for composing an application – using exports, imports and composition. Steps:

    1. Create a new Silverlight 4 application.

    2. Add references to the following assemblies:

        System.ComponentModel.Composition.dll
        System.ComponentModel.Composition.Initialization.dll

    3. Add a new user control called LeftControl.

    4. Replace the LayoutRoot Grid with the following xaml:

        <Grid x:Name="LayoutRoot" Background="Beige" Margin="40" >
            <Button Content="Left Content" Margin="30"></Button>
        </Grid>

    5. Add the following statement to the top of the LeftControl.xaml.cs file:

        using System.ComponentModel.Composition;

    6. Add the following attribute to the LeftControl class:

        [Export(typeof(LeftControl))]

    This attribute tells MEF that the type LeftControl will be exported – i.e. made available for other parts of the application to import and compose.

    7. Add a new user control called RightControl.

    8. Replace the LayoutRoot Grid with the following xaml:

        <Grid x:Name="LayoutRoot" Background="Green" Margin="40"  >
            <TextBlock Margin="40" Foreground="White" Text="Right Control" FontSize="16" VerticalAlignment="Center" HorizontalAlignment="Center" ></TextBlock>
        </Grid>

    9. Add the following statement to the top of the RightControl.xaml.cs file:

        using System.ComponentModel.Composition;

    10. Add the following attribute to the RightControl class:

        [Export(typeof(RightControl))]

    11. Add the following xaml to the LayoutRoot Grid in MainPage.xaml:

        <StackPanel Orientation="Horizontal" HorizontalAlignment="Center">
            <Border Name="LeftContent" Background="Red" BorderBrush="Gray" CornerRadius="20"></Border>
            <Border Name="RightContent" Background="Red" BorderBrush="Gray" CornerRadius="20"></Border>
        </StackPanel>

    The borders will hold the controls that will be imported and composed via MEF.

    12. Add the following statement to the top of the MainPage.xaml.cs file:

        using System.ComponentModel.Composition;
    13. Add the following properties to the MainPage class:

        [Import(typeof(LeftControl))]
        public LeftControl LeftUserControl { get; set; }

        [Import(typeof(RightControl))]
        public RightControl RightUserControl { get; set; }

    This defines properties accepting the LeftControl and RightControl types. The attributes tell MEF the discovered type that should be applied to each property when composition occurs.

    14. Replace the MainPage constructor with the following code:

        public MainPage()
        {
            InitializeComponent();
            CompositionInitializer.SatisfyImports(this);
            LeftContent.Child = LeftUserControl;
            RightContent.Child = RightUserControl;
        }

    The CompositionInitializer.SatisfyImports(this) call tells MEF to discover types related to the declared imports for this object (the MainPage object). At that point, types matching those specified in the import definitions are discovered in the executing assembly location of the application, instantiated, and assigned to the matching properties of the current object.

    15. Run the application and you will see the left control and right control displayed in the MainPage.

    Congratulations! You have used MEF to dynamically compose user controls into a parent control in a composite application model.

    In the next post we will build on this topic to cover using MEF to compose Silverlight applications dynamically in download-on-demand scenarios – so .xap packages can be downloaded only when needed, avoiding a large initial download for the main application xap.

    Take the Slalom Challenge at www.slalomchallenge.com!
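    A hedged variation on steps 6–13 (my own sketch, not part of the walkthrough): when several parts export the same contract, ImportMany collects all of them, which avoids declaring one property per control. WidgetPanel below is an assumed StackPanel in MainPage.xaml.

        // Assumes: using System.Collections.Generic; and that both user controls
        // are tagged [Export(typeof(UserControl))] instead of their concrete types.
        [ImportMany(typeof(UserControl))]
        public IEnumerable<UserControl> Widgets { get; set; }

        public MainPage()
        {
            InitializeComponent();
            CompositionInitializer.SatisfyImports(this);
            // Every discovered export lands in Widgets; add each to the panel.
            foreach (UserControl widget in Widgets)
                WidgetPanel.Children.Add(widget);
        }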


  • ROracle support for TimesTen In-Memory Database

    - by Sam Drake
    Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle. He shares with us a sample session using ROracle with the TimesTen In-Memory Database.

    Beginning in version 1.1-4, ROracle includes support for the Oracle TimesTen In-Memory Database, version 11.2.2. TimesTen is a relational database that provides very fast response times and high throughput through its memory-centric architecture; it is designed for low latency, high-volume data, and event and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response time, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database.

    ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment using the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC.

    The latest ROracle enhancements include:

    - Support for Oracle TimesTen In-Memory Database
    - Support for date-time using R's POSIXct/POSIXlt data types
    - RAW, BLOB and BFILE data type support
    - Option to specify the number of rows per fetch operation
    - Option to prefetch LOB data
    - Break support using Ctrl-C
    - Statement caching support

    TimesTen 11.2.2 contains enhanced support for analytics workloads and complex queries:

    - Analytic functions: AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE
    - Analytic clauses: OVER PARTITION BY and OVER ORDER BY
    - Multidimensional grouping operators, including grouping clauses (GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS) and grouping functions (GROUP, GROUPING_ID, GROUP_ID)
    - WITH clause, which allows repeated references to a named subquery block
    - Aggregate expressions over DISTINCT expressions
    - General expressions that return a character string in the source or a pattern within the LIKE predicate
    - Ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause)

    Note: Some functionality is only available with Oracle Exalytics; refer to the TimesTen product licensing document for details.

    Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver.

        > install.packages("ROracle")
        > library(ROracle)
        Loading required package: DBI
        > drv <- dbDriver("Oracle")

    Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct driver DSN as the OS user.

        > conn <- dbConnect(drv, username ="", password="", dbname = "localhost/SampleDb_1122:timesten_direct")

    You have the option to report the server type – Oracle or TimesTen?

        > print (paste ("Server type =", dbGetInfo (conn)$serverType))
        [1] "Server type = TimesTen IMDB"

    To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session.

        > dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE)
        [1] TRUE

    Verify that the newly created IRIS table is available in the database.
    To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively.

        > dbListTables (conn)
        [1] "IRIS"
        > dbListFields (conn, "IRIS")
        [1] "SEPAL.LENGTH" "SEPAL.WIDTH" "PETAL.LENGTH" "PETAL.WIDTH" "SPECIES"

    To retrieve a summary of the data from the database we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time.

        > iris.summary <- dbGetQuery(conn, 'SELECT SPECIES,
              AVG ("SEPAL.LENGTH") AS AVG_SLENGTH,
              AVG ("SEPAL.WIDTH") AS AVG_SWIDTH,
              AVG ("PETAL.LENGTH") AS AVG_PLENGTH,
              AVG ("PETAL.WIDTH") AS AVG_PWIDTH
          FROM IRIS GROUP BY ROLLUP (SPECIES)')
        > iris.summary
             SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH
        1     setosa    5.006000   3.428000       1.462   0.246000
        2 versicolor    5.936000   2.770000       4.260   1.326000
        3  virginica    6.588000   2.974000       5.552   2.026000
        4       <NA>    5.843333   3.057333       3.758   1.199333

    Finally, disconnect from the TimesTen database.

        > dbCommit (conn)
        [1] TRUE
        > dbDisconnect (conn)
        [1] TRUE

    We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: TimesTen In-Memory Database, ROracle. As always, we welcome comments and questions on the TimesTen and Oracle R technical forums.


  • Pace Layering Comes Alive

    - by Tanu Sood
    Rick Beers is Senior Director of Product Management for Oracle Fusion Middleware. Prior to joining Oracle, Rick held a variety of executive operational positions at Corning, Inc. and Bausch & Lomb. With a professional background that includes senior management positions in manufacturing, supply chain and information technology, Rick brings a unique set of experiences to cover the impact that technology can have on business models, processes and organizations. Rick hosts the IT Leaders Editorial on a monthly basis.

    By now, readers of this column are quite familiar with Oracle AppAdvantage, a unified framework of middleware technologies, infrastructure and applications utilizing a pace layered approach to enterprise systems platforms.

    1. Standardize and Consolidate core Enterprise Applications by removing invasive customizations, costly workarounds and the complexity that multiple instances create.

    2. Move business-specific processes and applications to the Differentiate layer, thus creating greater business agility with process extensions and best-of-breed applications managed by cross-application process orchestration.

    3. The Innovate layer contains all the business capabilities required for engagement, collaboration and intuitive decision making. This is the layer where innovation will occur, as people engage one another in a secure yet open and informed way.

    4. Simplify IT by minimizing complexity, improving performance and lowering cost with secure, reliable and managed systems across the entire enterprise.

    But what hasn't been discussed is the pace layered architecture that Oracle AppAdvantage adopts. What is it, what are its origins and why is it relevant to enterprise-scale applications and technologies? It's actually a fascinating tale that spans the past 20 years, and a basic understanding of it provides a wonderful context for what is evolving as the future of enterprise systems platforms.

    It all begins in 1994 with a book by noted architect Stewart Brand, of 'Whole Earth Catalog' fame. In his 1994 book How Buildings Learn, Brand popularized the term 'shearing layers', arguing that any building is actually a hierarchy of pieces, each of which inherently changes at different rates. In 1997 he produced a six-part BBC series adapted from the book, in which Part 6 focuses on shearing layers. In this segment Brand begins to introduce the concept of 'pace'.

    Brand further refined this idea in his subsequent book, The Clock of the Long Now, which began to link the concept of shearing layers to computing and introduced the term 'pace layering', where he proposes that:

    "An imperative emerges: an adaptive [system] has to allow slippage between the differently-paced systems … otherwise the slow systems block the flow of the quick ones and the quick ones tear up the slow ones with their constant change.
Embedding the systems together may look efficient at first but over time it is the opposite and destructive as well.” In 2000, IBM architects Ian Simmonds and David Ing published a paper entitled A Shearing Layers Approach to Information Systems Development, which applied the concept of Shearing Layers to systems design and development. It argued that systems at the time were still too rigid: they constrained organizations by their inability to adapt to change. The findings in the Conclusions section are particularly striking: “Our starting motivation was that enterprises need to become more adaptive, and that an aspect of doing that is having adaptable computer systems. The challenge is then to optimize information systems development for change (high maintenance) rather than stability (low maintenance). Our response is to make it explicit within software engineering the notion of shearing layers, and explore it as the principle that systems should be built to be adaptable in response to the qualitatively different rates of change to which they will be subjected. This allows us to separate functions that should legitimately change relatively slowly and at significant cost from that which should be changeable often, quickly and cheaply.” The problem at the time, of course, was that this vision of adaptable systems was simply not possible within the confines of first-generation ERP systems, which were conceived, designed and developed for standardization and compliance. It wasn’t until the maturity of open, standards-based integration, and the middleware innovation that followed, that pace layering became an achievable goal. And Oracle is leading the way. Oracle’s AppAdvantage framework makes pace layering come alive by taking a strategic vision 20 years in the making and transforming it into a reality. It allows enterprises to retain and even optimize their existing ERP systems, while wrapping around those ERP systems three layers of capabilities that inherently adapt as needed, at a pace that’s optimal for the enterprise.

    Read the article

  • creating a pre-menu level select screen

    - by Ephiras
    Hi, I am working on creating a tower-defence Java applet game and have come to a roadblock implementing a title screen from which I can select the level and difficulty for the rest of the game. My title screen class is called Menu. From this Menu class I need to pass many different variables into my Main class. I have used different classes before and know how to run them and such, but if both classes extend Applet and each has its own graphics method, how can I run things from Main even though it was created in Menu? What I essentially want to do is run the Menu class with its action listeners and graphics until a difficulty button has been selected, run the Main class (which works 100% without the Menu class), and pretty much terminate Menu so that I cannot go back to it and do not see its buttons or graphics menus. Can I run one applet and, when I choose a button, close that one and launch the other one? If you would like to download the full project you can find it here; I had to comment out all the code that wasn't working. My Menu class:
import java.awt.*;
import java.awt.event.*;
import java.applet.*;

public class Menu extends Applet implements ActionListener {
    Button bEasy, bMed, bHard;
    Main m;

    public void init() {
        bEasy = new Button("Easy");
        bEasy.setBounds(140,200,100,50);
        add(bEasy);
        bEasy.addActionListener(this); // the listeners were never registered in the original
        bMed = new Button("Medium");
        bMed.setBounds(280,200,100,50);
        add(bMed);
        bMed.addActionListener(this);
        bHard = new Button("Hard");
        bHard.setBounds(420,200,100,50);
        add(bHard);
        bHard.addActionListener(this);
        setLayout(null);
    }

    public void actionPerformed(ActionEvent e) {
        // Java cannot switch on an object reference, so compare the event source with if/else
        if (e.getSource() == bEasy) {
            m = new Main(6000,20,"levels/levelEasy.png"); // castleHealth, maxTowers, level path
        } else if (e.getSource() == bMed) {
            m = new Main(4000,15,"levels/levelMed.png");
        } else if (e.getSource() == bHard) {
            m = new Main(2000,10,"levels/levelHard.png"); // presumably levelHard.png, not levelEasy.png
        }
    }

    public void paint(Graphics g) {
        //m.draw(g);
    }
}
and here is my Main class initialising code.
import java.awt.*;
import java.awt.event.*;
import java.applet.*;
import java.io.IOException;

public class Main extends Applet implements Runnable, MouseListener, MouseMotionListener, ActionListener {
    Button startButton, UpgRange, UpgDamage; // set up the buttons
    Color roadCol, startCol, finCol, selGrass, selRoad; // set up the colors
    Enemy e[][];
    Tower t[];
    Image towerpic, backpic, roadpic, levelPic;
    private Image i;
    private Graphics doubleG;
    // here is the world: 0=grass 1=road 2=start 3=end
    int world[][], eStartX, eStartY;
    boolean drawMouse, gameEnd;
    static boolean start = false;
    static int gridLength = 15;
    static int round = 0;
    int Mx, My, timer = 1500;
    static int sqrSize = 31;
    int towers = 0, towerSelected = -10;
    static int castleHealth = 2000;
    String levelPath; // choose the level: Easy, Med or Hard
    int maxEnemy[] = {5,7,12,20,30,15,50,30,40,60}; // number of enemies per round
    int maxTowers = 15; // maximum number of towers allowed
    static int money = 2000, damPrice = 600, ranPrice = 350, towerPrice = 700;
    // money = the initial amount of money you start off with
    // damPrice is the price to increase the damage of a tower
    // ranPrice is the price to increase the range of a tower

    public Main(int cH, int mT, int mo, int dP, int rP, int tP, String path, int[] mE) { // constructor 1 (the original declared "public void main(...)" and was missing the opening brace)
        castleHealth = cH;
        maxTowers = mT;
        money = mo;
        damPrice = dP;
        ranPrice = rP;
        towerPrice = tP;
        levelPath = path; // assign the field -- "String levelPath = path;" only created a local variable
        maxEnemy = mE;
        buildLevel();
    }

    public Main(int cH, int mT, String path) { // basic constructor
        castleHealth = cH;
        maxTowers = mT;
        levelPath = path;
        // keep the default maxEnemy -- the original assigned an undeclared mE here
        buildLevel();
    }

    public void init() {
        setSize(sqrSize*15+200, sqrSize*15); // set the size of the screen
        roadCol = new Color(255,216,0); // set the colors for the different objects
        startCol = new Color(0,38,255);
        finCol = new Color(255,0,0);
        selRoad = new Color(242,204,155); // sel colors are used when the mouse hovers over something
        selGrass = new Color(0,190,0);
        roadpic = getImage(getDocumentBase(),"images/road.jpg");
        towerpic = getImage(getDocumentBase(),"images/tower.png");
        backpic = getImage(getDocumentBase(),"images/grass.jpg");
        levelPic = getImage(getDocumentBase(),"images/level.jpg");
        e = new Enemy[maxEnemy.length][]; // activates all of the enemies
        for (int r=0; r<e.length; r++)
            e[r] = new Enemy[maxEnemy[r]];
        t = new Tower[maxTowers];
        for (int i=0; i<t.length; i++)
            t[i] = new Tower(); // activates all the towers
        for (int i=0; i<e.length; i++) // sets all of the enemies' starting co-ordinates
            for (int j=0; j<e[i].length; j++)
                e[i][j] = new Enemy(eStartX, eStartY, world);
        initButtons(); // initialise all the buttons
        addMouseMotionListener(this);
        addMouseListener(this);
    }

    Read the article

  • Use your own domain email and tired of SPAM? SPAMfighter FTW

    - by Dave Campbell
    I wouldn't post this if I hadn't tried it... and I paid for it myself, so don't anybody be thinking I'm reviewing something someone sent me! Long ago and far away I got very tired of local ISPs and 2nd phone lines and took the plunge and got hooked up to cable... yeah, I know the 2nd phone line concept may be hard for everyone to understand, but that's how it was in 'the old days'. To avoid having to change email addresses all the time, I decided to buy a domain name, get minimal hosting, and use that for all email into the house. That way if I changed providers, all the email addresses wouldn't have to change. Of course, about a dozen domains later, I have LOTS of pop email addresses and even an exchange address to my client's server... times have changed. What also has changed is the fact that we get SPAM... 'back in the day' when I was a beta tester for the first ISP in Phoenix, someone tried sending an ad to all of us, and what he got in return for his trouble was a bunch of core dumps that locked up his email... if you don't know what a core dump is, ask your grandfather. But in today's world, we're all much more civilized than that, and as with many things, the criminals seem to have much more rights than we do, so we get inundated with email offering all sorts of wild schemes that you'd have to be brain-dead to accept, but yet... if people weren't accepting them, they'd stop sending them. I keep hoping that survival of the smartest would weed out the mental midgets that respond and then the junk email would stop, but that hasn't happened yet any more than finding high-quality hearing aids at the checkout line of Safeway because of all the dimwits playing music too loud inside their car... but that's another whole topic and I digress. So what's the solution for all the spam? And I mean *all*... on that old personal email address, I am now getting over 150 spam messages a day! Yes, I know that's why God invented the delete key, but I took it on as a challenge, and it's a matter of principle... why should I switch email addresses, or convert from [email protected] to something else, or have all my email filtered through some service just because some A-Hole somewhere has a site up trying to phish Ma & Pa Kettle (ask your grandfather about that too) out of their retirement money? Well... I got an email from my cousin the other day while I was writing yet another email rule, and there was a banner on the bottom of his email that said he was protected by SPAMfighter. SPAMfighter, huh... so I took a look at their site, and found yet one more of the supposed tools to help us. But... I read that they're a Microsoft Gold Partner... and that doesn't come lightly... so I took a gamble and here's what I found: I installed it, and had to do a couple things: 1) SPAMfighter stuffed the SPAMfighter folder into my client's exchange address... I deleted it, made a new SPAMfighter folder where I wanted it to go, then in the SPAMfighter client's settings for Outlook, I told it to put all spam there. 2) It didn't seem to be doing anything. There's a ribbon button that you can select "Block", and I did that, wondering if I was 'training' it, but it wasn't picking up duplicates. 3) I sent email to support, and wrote a post on the forum (note to self: reply to that post). By the time the folks from the home office responded, it was the next day, and first up, SPAMfighter knocked down everything that came through when Outlook opened... two thumbs up! 
I disabled my 'garbage collection' rule from Outlook, and told Outlook not to use the junk folder, thinking it was interfering. 4) Day 2 seemed to go about like Day 1... but I hung in there. 5) Day 3 is now a whole new day... I had left Outlook open and hadn't looked at the PC since sometime late yesterday afternoon, and when I looked this morning, *every bit* of spam was in the SPAMfighter folder!! I'm a new paying customer: after watching SPAMfighter work this morning, I purchased a 1-year license, and I now can sit and watch as emails come in and disappear from my inbox into the SPAMfighter folder. No more continual tweaking of the rules. I've got SPAMfighter set to 'Very Hard' filtering... personally I'd rather pull the few real emails out of the SPAMfighter folder than pull spam out of the real folders. Yes, this is simply another way of using the delete key, but you know what? ... it feels good :) Here's a screenshot of the stats after just about 48 hours of being onboard: Note that all the ones blocked by me were during Day 1 and 2... I've blocked none today, and everything is blocked. Stay in the 'Light!

    Read the article

  • await, WhenAll, WaitAll, oh my!!

    - by cibrax
    If you are dealing with asynchronous work in .NET, you might know that the Task class has become the main driver for wrapping asynchronous calls. Although this class was officially introduced in .NET 4.0, the programming model for consuming tasks was much simplified in C# 5.0 / .NET 4.5 with the addition of the new async/await keywords. In a nutshell, you can use these keywords to make asynchronous calls as if they were sequential, thereby avoiding any explicit fork or callback in the code. The compiler takes care of the rest. Yesterday I was writing some code for making multiple asynchronous calls to backend services, intending to run them in parallel. The code looked as follows,
var allResults = new List<Result>();
foreach(var provider in providers)
{
    var results = await provider.GetResults();
    allResults.AddRange(results);
}
return allResults;
You see, I believed I was using the await keyword to make multiple calls in parallel. Something I did not consider was what this code actually does once compiled: awaiting inside the loop serializes the calls. I started an interesting discussion with some smart folks on twitter. One of them, Tugberk Ugurlu, had the brilliant idea of actually writing some code to make a performance comparison with another approach using Task.WhenAll. There are two additional methods you can use to wait for the results of multiple calls in parallel, WhenAll and WaitAll. WhenAll creates a new task and waits for results in that new task, so it does not block the calling thread. WaitAll, on the other hand, blocks the calling thread. This is the code Tugberk initially wrote, and I modified afterwards to also show the results of WaitAll.
class Program
{
    private static Func<Stopwatch, Task>[] funcs = new Func<Stopwatch, Task>[]
    {
        async (watch) => { watch.Start(); await Task.Delay(1000); Console.WriteLine("1000 one has been completed."); },
        async (watch) => { await Task.Delay(1500); Console.WriteLine("1500 one has been completed."); },
        async (watch) => { await Task.Delay(2000); Console.WriteLine("2000 one has been completed."); watch.Stop(); Console.WriteLine(watch.ElapsedMilliseconds + "ms has been elapsed."); }
    };

    static void Main(string[] args)
    {
        Console.WriteLine("Await in loop work starts...");
        DoWorkAsync().ContinueWith(task =>
        {
            Console.WriteLine("Parallel work starts...");
            DoWorkInParallelAsync().ContinueWith(t =>
            {
                Console.WriteLine("WaitAll work starts...");
                WaitForAll();
            });
        });
        Console.ReadLine();
    }

    static async Task DoWorkAsync()
    {
        Stopwatch watch = new Stopwatch();
        foreach (var func in funcs)
        {
            await func(watch);
        }
    }

    static async Task DoWorkInParallelAsync()
    {
        Stopwatch watch = new Stopwatch();
        await Task.WhenAll(funcs[0](watch), funcs[1](watch), funcs[2](watch));
    }

    static void WaitForAll()
    {
        Stopwatch watch = new Stopwatch();
        Task.WaitAll(funcs[0](watch), funcs[1](watch), funcs[2](watch));
    }
}
After running this code, the results were conclusive.
Await in loop work starts...
1000 one has been completed.
1500 one has been completed.
2000 one has been completed.
4532ms has been elapsed.
Parallel work starts...
1000 one has been completed.
1500 one has been completed.
2000 one has been completed.
2007ms has been elapsed.
WaitAll work starts...
1000 one has been completed.
1500 one has been completed.
2000 one has been completed.
2009ms has been elapsed.
The await keyword in a loop does not really make the calls in parallel.
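Applying that result back to the original loop, the fix is to start all the provider calls before awaiting any of them, then await them together with Task.WhenAll. A minimal sketch, assuming GetResults returns Task<IEnumerable<Result>> as the loop above implies (IResultProvider is a stand-in name for the unnamed provider type):
// assumes: using System.Collections.Generic; using System.Linq; using System.Threading.Tasks;
static async Task<List<Result>> GetAllResultsAsync(IEnumerable<IResultProvider> providers)
{
    // kick off every call first so they all run concurrently...
    var tasks = providers.Select(p => p.GetResults()).ToList();
    // ...then await them together, without blocking the calling thread
    var perProvider = await Task.WhenAll(tasks);
    return perProvider.SelectMany(r => r).ToList(); // flatten into one list
}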

    Read the article

  • PanelGridLayout - A Layout Revolution

    - by Duncan Mills
    With the most recent 11.1.2 patchset (11.1.2.3) there has been a lot of excitement around ADF Essentials (and rightly so); however, in all the fuss I didn't want an even more significant change to get missed - yes, you read that correctly, a more significant change! I'm talking about the new panelGridLayout component. I can confidently say that this is one of the most revolutionary components that we've introduced in 11g, even though it sounds rather boring. To be totally accurate, panelGridLayout was introduced in 11.1.2.2, but without any presence in the component palette or other design time support, so it was largely missed unless you read the release notes. However, in this latest patchset it's finally front and center. It's time to explore - we (really) need to talk about layout.  Let's face it, with ADF Faces rich client, layout is a rather arcane pursuit. Once you are a layout master, all bow before you, but it's more of an art than a science, and it is often, in fact, way too difficult to achieve what should (apparently) be pretty simple. Here's a great example; it's a homework assignment I set for folks I'm teaching this stuff to.  The requirements for this layout are: The header is 80px high, the footer is 30px. These are both fixed.  The first section of the header containing the logo is 180px wide. The logo is centered within the top left hand corner of the header.  The title text is start-aligned in the center zone of the header and will wrap if the browser window is narrowed. It should be aligned in the center of the vertical space.  The about link is anchored to the right hand side of the browser with a 20px gap and again is center-aligned vertically. It will move as the browser window is reduced in width. The footer has a right-aligned copyright statement, again middle-aligned within a 30px high footer region and with a 20px buffer to the right hand edge. It will move as the browser window is reduced in width. All remaining space is given to a central zone, which, in this case, contains a panelSplitter. Expect that at some point in time you'll need a separate messages line in the center of the footer.  In the homework assignment I set, I also stipulate that no inlineStyles can be used to control alignment or margins and no use of other taglibs (e.g. JSF HTML or Trinidad HTML). So, if we take this purist approach, that basic page layout (in my stock solution) requires 3 panelStretchLayouts, 5 panelGroupLayouts and 4 spacers - not including the spacer I use for the logo and the contents of the central zone splitter - phew! The point is that even a seemingly simple layout needs a bit of thinking about, particularly when you consider stretching and browser re-size behavior. In fact, this little sample actually teaches you much of what you need to know to become vaguely competent at layouts in the framework. The underlying result of "the way things are" is that most of us reach for panelStretchLayout before even finishing the first sip of coffee as we embark on a new page design. In fact most pages you will see in any moderately complex ADF application will basically be nested panelStretchLayouts and panelGroupLayouts, sometimes many, many levels deep. So this is a problem; we've known this for some time, and now we have a good solution. (I should point out that the oft-used Trinidad trh tags are not a particularly good solution, as you're tying yourself to an HTML table based layout in that case, with a host of attendant issues in resize and bi-di behavior, but I digress.) 
So, tadaaa, I give to you panelGridLayout. PanelGridLayout, as the name suggests, takes a grid-like (dare I say slightly GridBag-like) approach to layout, dividing your layout into rows and columns with margins, sizing, stretch behaviour, colspans and rowspans all rolled in, all without the use of inlineStyle. As such, it provides a much more powerful and concise way of defining a layout such as the one above that is actually simpler and much more logical to design. The basic building blocks are the panelGridLayout itself, gridRow and gridCell. Your content sits inside the cells inside the rows, all helpfully allowing stretching, valign and halign definitions without the need to nest further panelGroupLayouts. So much simpler!  If I break down the homework example above, my nested conglomerate of 12 containers and spacers can be condensed down into a single panelGridLayout with 3 rows and 5 cell definitions (39 lines of source reduced to 24 in the case of the sample). What's more, the actual runtime representation in the browser DOM is much, much simpler and cleaner, with basically one DIV per cell. (Note that just because the panelGridLayout semantics look like an HTML table does not mean that it's rendered that way!) Another hidden benefit is the runtime cost. Because we can use a single layout to achieve much more complex geometries, the client-side layout code inside the browser has to work a lot less. This will be a real benefit if your application needs to run on lower powered clients such as netbooks or tablets. So, it's time, if you're on 11.1.2.2 or above, to smile warmly at your panelStretchLayouts, wrap the blanket around their knees and wheel them off to the Sunset Retirement Home for a well deserved rest. There's a new kid on the block and it wants to be your friend. 
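To give a feel for the markup, here is a rough sketch of what the homework page's skeleton might look like as a single grid. This is illustrative only - the attribute values shown are assumptions, so check the 11.1.2.3 tag documentation for the exact names and accepted values:
<af:panelGridLayout>
  <!-- fixed 80px header: logo cell, stretching title cell, about-link cell -->
  <af:gridRow height="80px">
    <af:gridCell width="180px" halign="center" valign="middle"><!-- logo --></af:gridCell>
    <af:gridCell width="100%" halign="start" valign="middle"><!-- wrapping title text --></af:gridCell>
    <af:gridCell width="auto" halign="end" valign="middle" marginEnd="20px"><!-- about link --></af:gridCell>
  </af:gridRow>
  <!-- center row takes all remaining height -->
  <af:gridRow height="100%">
    <af:gridCell columnSpan="3" width="100%" halign="stretch" valign="stretch"><!-- central panelSplitter --></af:gridCell>
  </af:gridRow>
  <!-- fixed 30px footer with right-aligned copyright -->
  <af:gridRow height="30px">
    <af:gridCell columnSpan="3" halign="end" valign="middle" marginEnd="20px"><!-- copyright --></af:gridCell>
  </af:gridRow>
</af:panelGridLayout>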

    Read the article

  • ROracle support for TimesTen In-Memory Database

    - by Sherry LaMonica
    Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle. He shares with us a sample session using ROracle with the TimesTen In-Memory Database. Beginning in version 1.1-4, ROracle includes support for the Oracle TimesTen In-Memory Database, version 11.2.2. TimesTen is a relational database providing very fast response times and high throughput through its memory-centric architecture. TimesTen is designed for low latency, high-volume data, and event and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response time, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database. ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment using the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC. The latest ROracle enhancements include:
Support for Oracle TimesTen In-Memory Database
Support for Date-Time using R's POSIXct/POSIXlt data types
RAW, BLOB and BFILE data type support
Option to specify number of rows per fetch operation
Option to prefetch LOB data
Break support using Ctrl-C
Statement caching support
TimesTen 11.2.2 contains enhanced support for analytics workloads and complex queries:
Analytic functions: AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE
Analytic clauses: OVER PARTITION BY and OVER ORDER BY
Multidimensional grouping operators: Grouping clauses: GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS; Grouping functions: GROUP, GROUPING_ID, GROUP_ID
WITH clause, which allows repeated references to a named subquery block
Aggregate expressions over DISTINCT expressions
General expressions that return a character string in the source or a pattern within the LIKE predicate
Ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause)
Note: Some functionality is only available with Oracle Exalytics; refer to the TimesTen product licensing document for details. Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver.
> install.packages("ROracle")
> library(ROracle)
Loading required package: DBI
> drv <- dbDriver("Oracle")
Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct driver DSN as the OS user.
> conn <- dbConnect(drv, username ="", password="", dbname = "localhost/SampleDb_1122:timesten_direct")
You have the option to report the server type - Oracle or TimesTen?
> print (paste ("Server type =", dbGetInfo (conn)$serverType))
[1] "Server type = TimesTen IMDB"
To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session.
> dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE)
[1] TRUE
Verify that the newly created IRIS table is available in the database. 
To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively.
> dbListTables (conn)
[1] "IRIS"
> dbListFields (conn, "IRIS")
[1] "SEPAL.LENGTH" "SEPAL.WIDTH" "PETAL.LENGTH" "PETAL.WIDTH" "SPECIES"
To retrieve a summary of the data from the database we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time.
> iris.summary <- dbGetQuery(conn, 'SELECT SPECIES, AVG ("SEPAL.LENGTH") AS AVG_SLENGTH, AVG ("SEPAL.WIDTH") AS AVG_SWIDTH, AVG ("PETAL.LENGTH") AS AVG_PLENGTH, AVG ("PETAL.WIDTH") AS AVG_PWIDTH FROM IRIS GROUP BY ROLLUP (SPECIES)')
> iris.summary
  SPECIES    AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH
1 setosa     5.006000    3.428000   1.462       0.246000
2 versicolor 5.936000    2.770000   4.260       1.326000
3 virginica  6.588000    2.974000   5.552       2.026000
4 <NA>       5.843333    3.057333   3.758       1.199333
Finally, disconnect from the TimesTen database.
> dbCommit (conn)
[1] TRUE
> dbDisconnect (conn)
[1] TRUE
We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: TimesTen In-Memory Database, ROracle. As always, we welcome comments and questions on the TimesTen and Oracle R technical forums.

    Read the article

  • Performance triage

    - by Dave
    Folks often ask me how to approach a suspected performance issue. My personal strategy is informed by the fact that I work on concurrency issues. (When you have a hammer everything looks like a nail, but I'll try to keep this general.) A good starting point is to ask yourself if the observed performance matches your expectations. Expectations might be derived from known system performance limits, prototypes, and other software or environments that are comparable to your particular system-under-test. Some simple comparisons and microbenchmarks can be useful at this stage. It's also useful to write some very simple programs to validate some of the reported or expected system limits. Can that disk controller really tolerate and sustain 500 reads per second? To reduce the number of confounding factors it's better to try to answer that question with a very simple targeted program. And finally, nothing beats having familiarity with the technologies that underlie your particular layer. On the topic of confounding factors, as our technology stacks become deeper and less transparent, we often find our own technology working against us in some unexpected way to choke performance rather than simply running into some fundamental system limit. A good example is the warm-up time needed by just-in-time compilers in Java Virtual Machines. I won't delve too far into that particular hole except to say that it's rare to find good benchmarks and methodology for Java code. Another example is power management on x86. Power management is great, but it can take a while for the CPUs to throttle up from low(er) frequencies to full throttle. And while I love "turbo" mode, it makes benchmarking applications with multiple threads a chore, as you have to remember to turn it off and then back on; otherwise short single-threaded runs may look abnormally fast compared to runs with higher thread counts. In general for performance characterization I disable turbo mode and fix the power governor at "performance" state. Another source of complexity is the scheduler, which I've discussed in prior blog entries. Let's say I have a running application and I want to better understand its behavior and performance. We'll presume it's warmed up, is under load, and is in an execution mode representative of what we think the norm would be. It should be in steady-state, if a steady-state mode even exists. On Solaris the very first thing I'll do is take a set of "pstack" samples. Pstack briefly stops the process and walks each of the stacks, reporting symbolic information (if available) for each frame. For Java, pstack has been augmented to understand Java frames, and even report inlining. A few pstack samples can provide powerful insight into what's actually going on inside the program. You'll be able to see calling patterns, which threads are blocked on what system calls or synchronization constructs, memory allocation, etc. If your code is CPU-bound then you'll get a good sense of where the cycles are being spent. (I should caution that normal C/C++ inlining can diffuse an otherwise "hot" method into other methods. This is a rare instance where pstack sampling might not immediately point to the key problem.) At this point you'll need to reconcile what you're seeing with pstack and your mental model of what you think the program should be doing. They're often rather different. And generally if there's a key performance issue, you'll spot it with a moderate number of samples. 
I'll also use OS-level observability tools to look for bottlenecks where threads contend for locks, other situations where threads are blocked, and the distribution of threads over the system. On Solaris some good tools are mpstat and, to a lesser degree, vmstat. Try running "mpstat -a 5" in one window while the application program runs concurrently. One key measure is the voluntary context switch rate, "vctx" or "csw", which reflects threads descheduling themselves. It's also good to look at the user, system and idle CPU percentages. This can give a broad but useful understanding of whether your threads are mostly parked or mostly running. For instance, if your program makes heavy use of malloc/free, it might be the case that you're contending on the central malloc lock in the default allocator. In that case you'd see malloc calling lock in the stack traces, observe a high csw/vctx rate as threads block on the malloc lock, and your "usr" time would be less than expected. Solaris dtrace is a wonderful and invaluable performance tool as well, but in a sense you have to frame and articulate a meaningful and specific question to get a useful answer, so I tend not to use it for first-order screening of problems. It's also most effective for OS and software-level performance issues as opposed to HW-level issues. For that reason I recommend mpstat & pstack as the first step in performance triage. If some other OS-level issue is evident then it's good to switch to dtrace to drill more deeply into the problem. Only after I've ruled out OS-level issues do I switch to using hardware performance counters to look for architectural impediments.
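To make that first step concrete, a first-pass triage session on Solaris might look like the sketch below; the PID, sample count and interval are arbitrary placeholders:
# take a handful of pstack samples of the target process (Solaris)
PID=12345                        # substitute the pid of your application
for i in 1 2 3 4 5; do
    pstack $PID > pstack.$i.out  # each sample briefly stops the process and walks all thread stacks
    sleep 2
done
# in another window, watch per-CPU activity; a high vctx/csw rate suggests threads blocking
mpstat -a 5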

    Read the article

  • CSS3 - "connecting" 2 classes animation [closed]

    - by Nave Tseva
    I have this CSS + HTML code:
<!DOCTYPE HTML>
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8" />
<title>What</title>
<style type="text/css">
#page { width: 900px; padding: 0px; margin: 0 auto; direction: rtl; position: relative; }
#box1 { position: relative; width: 500px; border: 1px solid black; box-shadow: -3px 8px 34px #808080; border-radius: 20px; box-shadow: -8px 5px 5px #888888; right: 300px; top: 250px; height: 150px; -webkit-transition: all 1s; font-size: large; color: black; padding: 10px; background: #D0D0D0; opacity: 0; }
@-webkit-keyframes myFirst {
  0% { right: 300px; top: 150px; background: #D0D0D0; opacity: 0; }
  100% { background: #909090; right: 300px; top: 200px; opacity: 1; }
}
#littlebox1 { top: 200px; position: absolute; display: inline-block; }
.littlebox1-sentence { font-size: large; padding-bottom: 15px; padding-top: 15px; padding-left: 25px; padding-right: 10px; background: #D0D0D0; border-radius: 10px; -webkit-transition: background .25s ease-in-out; }
#littlebox1:hover ~ #box1 { -webkit-transition: all 0s; background: #909090; right: 300px; top: 200px; -webkit-animation: myFirst 1s; -webkit-animation-fill-mode: initial; opacity: 1; }
.littlebox1-sentence:hover { background: #909090; }
.littlebox1-sentence:hover + .triangle { border-right: 50px solid #909090; }
.triangle { position: relative; width: 0; height: 0; border-right: 50px solid #D0D0D0; border-top: 24px solid transparent; border-bottom: 24px solid transparent; right: 160px; -webkit-transition: border-right .25s ease-in-out; }
.triangle:hover { border-right: 50px solid #909090; }
</style>
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
<script>
$(function() {
  $('.littlebox1-sentence').hover(function() {
    $(this).css('background', '#909090');
    $('.triangle').css('border-right', '50px solid #909090');
  });
}); // this closing "});" was missing in the original
</script>
<script>
$(function() {
  $('.triangle').hover(function() {
    $(this).css('border-right', '50px solid #909090');
    $('.littlebox1-sentence').css('background', '#909090');
  });
}); // this closing "});" was missing too
</script>
</head>
<body dir="rtl">
<div id="page">
<div id="littlebox1" class="littlebox1-sentence">put your mouse here</div><div id="littlebox1" class="triangle"> </div>
<div id="box1"> </div>
</div>
</body>
</html>
You will find a live example here: http://jsfiddle.net/FLe4g/12/ The problem is that something is wrong in the second jQuery block. I want the box and the triangle to change their color together every time the mouse is over either of them. When I put the mouse on the box it works fine, but when I put the mouse on the triangle it doesn't work. Any suggestions on how to fix this code?

    Read the article

  • Multithreading in lwjgl getting rid of sleep.

    - by pangaea
    I'm trying to use multithreading in my game. However, I can't seem to get rid of the sleep. If I don't sleep, it's a blank screen, as there is no time for the computer to actually render the TriangleMob list - it can't access getArrayList(); in my main class I have a TriangleMob ArrayList. If I delay, then it can access previousMob and it renders. If I don't, then it's a blank screen. Can I get rid of the delay? Also, is this a bad way to multithread? Surely, this should be fast. I need multithreading, so can you please not suggest not using it.
// imports added for a self-contained example; Vector3f is LWJGL's org.lwjgl.util.vector.Vector3f
import java.util.ArrayList;
import org.lwjgl.util.vector.Vector3f;
import static org.lwjgl.opengl.GL11.*;

public class TriangleMob extends Thread implements Runnable {
    private static int count = 0;
    private int objectDisplayList;
    private static ArrayList<TriangleMob> previousMob = new ArrayList<TriangleMob>();
    private static ArrayList<TriangleMob> currentMob = new ArrayList<TriangleMob>();
    private static ArrayList<TriangleMob> laterMob = new ArrayList<TriangleMob>();
    private Vector3f position = new Vector3f(0f,0f,0f);
    private Vector3f movement = new Vector3f(0f,0f,0f);

    public TriangleMob() {
        CreateDisplayList(); // create the display list
        count++;
    }

    public TriangleMob(Vector3f position) {
        CreateDisplayList(); // create the display list
        this.position = position;
        count++;
    }

    private void CreateDisplayList() {
        objectDisplayList = glGenLists(1);
        glNewList(objectDisplayList, GL_COMPILE);
        {
            double topPoint = 0.75;
            glBegin(GL_TRIANGLES);
            glColor4f(1, 1, 0, 1f); glVertex3d(0, topPoint, -5);
            glColor4f(0, 0, 1, 1f); glVertex3d(-1, -0.75, -4);
            glColor4f(0, 0, 1, 1f); glVertex3d(1, -.75, -4);
            glColor4f(1, 1, 0, 1f); glVertex3d(0, topPoint, -5);
            glColor4f(0, 0, 1, 1f); glVertex3d(1, -0.75, -4);
            glColor4f(0, 0, 1, 1f); glVertex3d(1, -0.75, -6);
            glColor4f(1, 1, 0, 1f); glVertex3d(0, topPoint, -5);
            glColor4f(0, 0, 1, 1f); glVertex3d(1, -0.75, -6);
            glColor4f(0, 0, 1, 1f); glVertex3d(-1, -.75, -6);
            glColor4f(1, 1, 0, 1f); glVertex3d(0, topPoint, -5);
            glColor4f(0, 0, 1, 1f); glVertex3d(-1, -0.75, -6);
            glColor4f(0, 0, 1, 1f); glVertex3d(-1, -.75, -4);
            glEnd();
            glColor4f(1, 1, 1, 1);
        }
        glEndList();
    }

    public static int getCount() { return count; }
    public Vector3f getMovement() { return movement; }
    public Vector3f getPosition() { return position; }
    public synchronized int getObjectList() { return objectDisplayList; }

    public synchronized static ArrayList<TriangleMob> getArrayList() {
        if (previousMob != null) {
            return previousMob;
        }
        previousMob.add(new TriangleMob());
        return previousMob;
    }

    public synchronized void move(Vector3f movement) {
        // move in all 3 axes
        position.x += movement.x;
        position.y += movement.y;
        position.z += movement.z;
    }

    public synchronized void render() {
        glPushMatrix();
        glTranslatef(-position.x, -position.y, -position.z);
        glCallList(objectDisplayList);
        glPopMatrix();
    }

    public synchronized static void setTriangleMob(ArrayList<TriangleMob> triangleMobSet) {
        laterMob = triangleMobSet;
    }

    private synchronized void setPreTriangleMob(ArrayList<TriangleMob> currentMob2) {
        previousMob = currentMob2;
    }

    public void run() {
        while (true) {
            if (laterMob != null) { // the original tested == null, which never copied anything useful
                currentMob = laterMob;
                System.out.println("Copying");
            }
            for (int i = 0; i < currentMob.size(); i++) {
                currentMob.get(i).move(new Vector3f(0.1f, 0.01f, 0.01f));
            }
            setPreTriangleMob(currentMob);
            try {
                sleep(1L);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

    Read the article

  • Styling ASP.NET MVC Error Messages

    - by MightyZot
    Originally posted on: http://geekswithblogs.net/MightyZot/archive/2013/11/11/styling-asp.net-mvc-error-messages.aspx Off the cuff, it may look like you’re stuck with the presentation of your error messages (model errors) in ASP.NET MVC. That’s not the case, though. You actually have quite a number of options with regard to styling those boogers. Like many of the helpers in MVC, the Html.ValidationMessageFor helper has multiple overloads. One of those overloads lets you pass a dictionary, or anonymous object, representing attribute values for the resulting markup.
@Html.ValidationMessageFor(m => Model.Whatever, null, new { @class = "my-error" })
By passing the htmlAttributes parameter, which is the last parameter in the call to the overload of Html.ValidationMessageFor shown above, I can style the resulting markup by associating styles with the my-error css class. When you run your MVC project and view the source, you’ll notice that MVC adds the class field-validation-valid or field-validation-error to a span created by the helper. You could actually just style those classes instead of adding your own… it’s really up to you. Now, what if you wanted to move that error message around? Maybe you want to put that error message in a box or a callout. How do you do that? When I first started using MVC, it didn’t occur to me that the Html.ValidationMessageFor helper just spits out a little bit of markup. I wanted to put the error messages in boxes with white backgrounds (our site originally had a black background) and show a little nib on the side to make them look like callouts or conversation bubbles. Not realizing how much freedom there is in the styling and markup, and after reading someone else’s post, I created my own version of the ValidationMessageFor helper that took out the span and replaced it with divs. I styled the divs to produce the effect of a popup box and had a lot of trouble with sizing and such. That’s a really silly and unnecessary way to solve this problem. If you want to move your error messages around, all you have to do is move the helper. MVC doesn’t appear to care where you put it, which makes total sense when you think about it. Html.ValidationMessageFor is just spitting out a little markup using a little bit of reflection on the name you’re passing it. All you’ve got to do to style it the way you want it is to put it in whatever markup you desire. Take a look at this, for example…
<div class="my-anchor">@Html.ValidationMessageFor(m => Model.Whatever)</div>
@Html.TextBoxFor(m => Model.Whatever)
Now, given that bit of HTML, consider the following CSS…
<style>
.my-anchor { position:relative; }
.field-validation-error {
   background-color:white;
   border-radius:4px;
   border: solid 1px #333;
   display: block;
   position: absolute;
   top:0; right:0; left:0;
   text-align:right;
}
</style>
The my-anchor class establishes an anchor for the absolutely positioned error message. Now you can move the error message wherever you want it relative to the anchor. Using CSS3, there are some other tricks. For example, you can use the :not(:empty) selector to select the span and apply styles based upon whether or not the span has text in it. Keep it simple, though. Moving your elements around using absolute positioning may cause you issues on devices with screens smaller than your standard laptop or PC. While looking for something else recently, I saw someone asking how to style the output of Html.ValidationSummary.  
Html.ValidationSummary is the helper that will spit out a list of property errors, general model errors, or both. Html.ValidationSummary spits out fairly simple markup as well, so you can use the techniques described above with it also. The resulting markup is a <ul><li></li></ul> unordered list of error messages that carries the class validation-summary-errors. In the forum question, the user was asking how to hide the error summary when there are no errors. Their errors were in a red box and they didn’t want to show an empty red box when there aren’t any errors. Obviously, you can use the CSS3 selectors to apply different styles to the list when it’s empty and when it’s not empty; however, that’s not supported in all browsers. Well, it just so happens that the unordered list carries the class validation-summary-valid when the list is empty. The div rendered by the Html.ValidationSummary helper remains visible, containing one invisible list item, but you can always just style the whole div with “display:none” when the validation-summary-valid class is applied and make it visible when the validation-summary-errors class is applied. Or, if you don’t like that solution, which I like quite well, you can also check the model state for errors with something like this…
int errors = ViewData.ModelState.Sum(ms => ms.Value.Errors.Count);
That’ll give you a count of the errors that have been added to ModelState. You can check that and conditionally include markup in your page if you want to. The choice is yours. Obviously, doing most everything you can with styles increases the flexibility of the presentation of your solution, so I recommend going that route when you can. That picture of the fat guy jumping has nothing to do with the article. That’s just a picture of me on the roof and I thought it was funny. Doesn’t every post need a picture?
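Putting that hide-when-valid advice into CSS is just a pair of rules keyed off the classes MVC already emits; the red-box styling here is illustrative:
/* MVC applies validation-summary-valid when the list is empty */
.validation-summary-valid { display: none; }
/* ...and validation-summary-errors when there is at least one error */
.validation-summary-errors {
    display: block;
    border: solid 1px red;
    background-color: #fee;
}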

    Read the article

  • IE9, LightSwitch Beta 2 and Zune HD: A Study in Risk Management?

    - by andrewbrust
    Photo by parl, 'Risk.’ Under Creative Commons Attribution-NonCommercial-NoDerivs License This has been a busy week for Microsoft, and for me as well.  On Monday, Microsoft launched Internet Explorer 9 at South by Southwest (SXSW) in Austin, TX.  That evening I flew from New York to Seattle.  On Tuesday morning, Microsoft launched Visual Studio LightSwitch, Beta 2 with a Go-Live license, in Redmond, and I had the privilege of speaking at the keynote presentation where the announcement was made.  Readers of this blog know I‘m a fan of LightSwitch, so I was happy to tell the app dev tools partners in the audience that I thought the LightSwitch extensions ecosystem represented a big opportunity – comparable to the opportunity when Visual Basic 1.0 was entering its final beta roughly 20 years ago.  On Tuesday evening, I flew back to New York (and wrote most of this post in-flight). Two busy, productive days.  But there was a caveat that impacts the accomplishments, because Monday was also the day reports surfaced from credible news agencies that Microsoft was discontinuing its dedicated Zune hardware efforts.  While the Zune brand, technology and service will continue to be a component of Windows Phone and a piece of the Xbox puzzle as well, speculation is that Microsoft will no longer be going toe-to-toe with iPod touch in the portable music player market. If we take all three of these developments together (even if one of them is based on speculation), two interesting conclusions can reasonably be drawn, one good and one less so. Microsoft is doubling down on technologies it finds strategic and de-emphasizing those that it does not.  HTML 5 and the Web are strategic, so here comes IE9, and it’s a very good browser.  Try it and see.  Silverlight is strategic too, as is SQL Server, Windows Azure and SQL Azure, so here comes Visual Studio LightSwitch Beta 2 and a license to deploy its apps to production.  Downloads of that product have exceeded Microsoft’s projections by more than 50%, and the company is even citing analyst firms’ figures covering the number of power-user developers that might use it. (I happen to think the product will be used by full-fledged developers as well, but that’s a separate discussion.) Windows Phone is strategic too…I wasn’t 100% positive of that before, but the Nokia agreement has made me confident.  Xbox as an entertainment appliance is also strategic.  Standalone music players are not strategic – and even if they were, selling them has been a losing battle for Microsoft.  So if Microsoft has consolidated the Zune content story and the ZunePass subscription into Xbox and Windows Phone, it would make sense, and would be a smart allocation of resources.  Essentially, it would be for the greater good. But it’s not all good.  In this scenario, Zune player customers would lose out.  Unless they wanted to switch to Windows Phone, and then use their phone’s battery for the portable media needs, they’re going to need a new platform.  They’re going to feel abandoned.  Even if Zune lives, there have been other such cul de sacs for customers.  Remember SPOT watches?  Live Spaces?  The original Live Mesh?  Microsoft discontinued each of these products.  The company is to be commended for cutting its losses, as admitting a loss isn’t easy.  But Redmond won’t be well-regarded by the victims of those decisions.  Instead, it gets black marks. What’s the answer?  
I think it’s a bit like the 1980’s New York City “don’t block the box” gridlock rules: don’t enter an intersection unless you see a clear path through it. If the light turns red and you’re blocking the perpendicular traffic, that’s your error in judgment. You get fined and get points on your license, and you don’t get to shrug it off as beyond your control. Accountability is key. The same goes for Microsoft. If it decides to enter a market, it should see a reasonable path to success in that market. Switching analogies, Microsoft shouldn’t make investments haphazardly, and it certainly shouldn’t ask investors to buy into a high-risk fund that is sold as safe and which offers only moderate returns. People won’t continue to invest with a fund manager with a track record of over-zealous, imprudent, sub-prime investments. The same is true on the product side for Microsoft, and not just with music players and geeky wrist watches. It’s true of Web browsers, and line-of-business app dev tools, and smartphones, and cloud platforms and operating systems too. When Microsoft is casual about its own risk, it raises risk for its customers, and weakens its reputation, market share and credibility. That doesn’t mean all risk is bad, but it does mean no product team’s risk should be taken lightly. For mutual fund companies, it’s the CEO’s job to give his fund managers autonomy, but to make sure they’re conforming to a standard of rational risk management, because all those funds carry the same brand, and many of them serve the same investors. The same goes for Microsoft, its product portfolio, its executive ranks and its product managers.

    Read the article

  • CSS hover behavior inconsistent on desktop/mobile devices [migrated]

    - by tbart
    I have a strange problem: this page looks good on desktop browsers, but the hovering effect does not seem to work correctly on at least my CM7 Android 2.3.7 device. I know hovering is not supposed to work on touch displays as it does with a mouse, but I'd like to have touch feedback, i.e. the highlight color should show once the user has tapped a menu item. This does work when the link is just an href="#", but it does not when it is a real link. I tried all sorts of stuff, as you can see, to no avail. If you go back in the browser history after having tapped a real link, the item is highlighted, so the browser understands the CSS I am throwing at it. However, the JavaScript alert makes it clear that it only seems to interpret the link-opening action and does not care about the color-changing stuff. Weird, that is. Workarounds welcome, preferably without JavaScript, but if it has to be JS, then go ahead! Either go here: http://orpheus.co.at/hoverprob and Use the source, Luke! or see it here in all its glory:
<html>
<head>
<meta name="viewport" content="width=320">
<style>
#nav, #nav ul { width: 100%; float: left; list-style: none; line-height: 1; background: #fff; font-weight: bold; padding: 0; margin: 0 0 5px 0; }
#nav a { display: block; color: #001834; text-decoration: none; padding: 5px 7px; }
#nav li { float: left; padding: 0; width: 33%; }
#nav li ul { position: absolute; left: -9999px; height: auto; margin: 0; opacity: .95; width: 100%; }
#nav li a { text-align: center; height: 20px; line-height: 20px; }
#nav li ul li a { text-align: left; }
#nav li ul li { float: none; /* width: 316px; */ width: 100%; }
#nav li:hover ul ul, #nav li:hover ul ul ul, #nav li.sfhover ul ul, #nav li.sfhover ul ul ul { left: -9999px; }
#nav li:hover ul, #nav li li:hover ul, #nav li li li:hover ul, #nav li.sfhover ul, #nav li li.sfhover ul, #nav li li li.sfhover ul { left: 0; }
#nav li.educate { background: #FFF0B8; /* background: #FF0000; */ /* border-radius: 5px; */ border: 5px; }
#nav li.educate:hover { background: #FFCE00; /* border-radius: 5px; */ }
</style>
</head>
<body>
<div id="mobMenu">
<ul id="nav" class="nav">
<li class="educate"><a href="#">menu</a>
<ul class="educate">
<li class="educate"><a href="#">href=&quot;#&quot;, works</a></li>
<!-- (+empty onmouseover for iOS devices) -->
<li class="educate"><a onmouseover="" href="index.html">does not work, real link</a></li>
<li class="educate" id="bla"><a onmousedown="document.getElementById('bla').style.backgroundColor='Blue'; alert('Done');document.location='index.html';" href="#">JS, not interpreted in corr order</a></li>
</ul>
</li>
</ul> <!-- this closing tag was missing -->
</div>
</body>
</html>

    Read the article

  • ASP.NET: Including JavaScript libraries conditionally from CDN

    - by DigiMortal
    When developing cloud applications, it is still useful to build them so they can also run on a local machine without a network connection. One thing you load from a CDN when in the cloud and from the application folder when not connected is common JavaScript libraries. In this posting I will show you how to add support for local and CDN script stores to your ASP.NET MVC web application. Our solution is simple. We will add a new configuration setting to our web.config file (including its cloud transform file) and a new property to our web application. In the master page where scripts are included, we will include scripts from the CDN conditionally. There is nothing complex; all the changes we make are simple ones.
1. Adding a new property to the web application. Although I am using an ASP.NET MVC web application, these modifications also work very well with ASP.NET Web Forms. Open Global.asax and add a new static property to your application class.
public static bool UseCdn
{
    get
    {
        var valueString = ConfigurationManager.AppSettings["UseCdn"];
        bool useCdn;
        bool.TryParse(valueString, out useCdn);
        return useCdn;
    }
}
If you want fewer round-trips to configuration data, you can keep a nullable boolean in application class scope and load the CDN setting the first time it is requested.
2. Adding the new configuration setting to web.config. By default my application uses local scripts. Although my application runs in the cloud, I can do a lot of stuff without a staging environment in the cloud, so by default I have no traffic costs when including scripts from application folders.
<appSettings>
  <add key="UseCdn" value="false" />
</appSettings>
You can also set the UseCdn value to true and change it to false when you are not connected to the network.
3. Modifying the web.config cloud transform. I have a special configuration for my solution that I use when deploying my web application to the cloud. This configuration is called Cloud and its transform is located in web.cloud.config. To make the application use the CDN when deployed to the cloud, we need the following transform.
<appSettings>
  <add key="UseCdn"
       value="true"
       xdt:Transform="SetAttributes"
       xdt:Locator="Match(key)" />
</appSettings>
Now when you publish your application to the cloud, it uses the CDN by default.
4. Including scripts in master pages. The last thing we need to change is our master page. My solution is simple: I check whether I have to include scripts from the CDN, and if so, I include them from there; otherwise my scripts are included from the application folder.
@if (MyWeb.MvcApplication.UseCdn)
{
    <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.4.min.js" type="text/javascript"></script>
}
else
{
    <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script>
}
Although only one script is shown here, you can add all your scripts that are also available in some CDN to this if-else block. You are free to include scripts from different CDN services if you need to.
Conclusion. As we saw, it was very easy to modify our application to make it use a CDN for JavaScript libraries in the cloud and local scripts when run on a local machine. We made only small changes to our application code, configuration and master pages to get different script sources supported. Our application is now less dependent on external sources when we are working on it.
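If many scripts must switch between CDN and local sources, the if-else can be pulled into a small helper. The sketch below is a hypothetical extension method built on the same UseCdn flag; it is not part of the original post:
// hypothetical helper -- assumes the MyWeb.MvcApplication.UseCdn property from Global.asax
using System.Web;
using System.Web.Mvc;

public static class ScriptHelper
{
    public static IHtmlString Script(this UrlHelper url, string cdnUrl, string localPath)
    {
        // pick the CDN address in the cloud, the app-folder copy otherwise
        var src = MyWeb.MvcApplication.UseCdn ? cdnUrl : url.Content(localPath);
        return new HtmlString("<script src=\"" + src + "\" type=\"text/javascript\"></script>");
    }
}
In the master page, each script include then becomes a one-liner:
@Url.Script("http://ajax.microsoft.com/ajax/jquery/jquery-1.4.4.min.js", "~/Scripts/jquery-1.4.4.min.js")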

    Read the article

  • Using UDEV for RAC ASM devices on Linux 6

    - by Liu Maclean
    Maclean has written before about choosing between UDEV and ASMLIB for RAC storage devices; see the earlier posts "Why ASMLIB and why not?" and the one on using the UDEV service to resolve RAC ASM device names. That earlier UDEV post gave the following udev rule recipe:
for i in b c d e f g h i j k ;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
done
That recipe works on Linux 5, but it no longer works on Red Hat/Oracle Linux 6. The reasons: on OEL6/RHEL6, 1. the scsi_id options changed - scsi_id -g -u -s is no longer accepted; 2. udevtest is gone - use udevadm instead.
How to use udev for Oracle ASM in Oracle Linux 6
The working steps for building the udev rules on Red Hat/Oracle Linux 6 are:
1. Confirm the Linux 6 release:
[root@vrh6 dev]# cat /etc/issue
Oracle Linux Server release 6.2
Kernel \r on an \m
2. Add the required options to /etc/scsi_id.config:
echo "options=--whitelisted --replace-whitespace" >> /etc/scsi_id.config
3. List the disks that need udev rules:
[root@vrh6 dev]# ls -l sd*
brw-rw----. 1 root disk 8, 0 Jun 30 09:29 sda
brw-rw----. 1 root disk 8, 1 Jun 30 09:29 sda1
brw-rw----. 1 root disk 8, 2 Jun 30 09:29 sda2
brw-rw----. 1 root disk 8, 16 Jun 30 09:29 sdb
brw-rw----. 1 root disk 8, 32 Jun 30 09:29 sdc
brw-rw----. 1 root disk 8, 48 Jun 30 09:29 sdd
brw-rw----. 1 root disk 8, 64 Jun 30 09:29 sde
brw-rw----. 1 root disk 8, 80 Jun 30 09:29 sdf
Here sdb through sdf are the disks to be bound.
4. Loop over b through f to generate the rules:
# AUTO UDEV RULE BY Maclean Liu 2012/06/30
for i in b c d e f ;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
done
This prints one rule per disk; append them to /etc/udev/rules.d/99-oracle-asmdevices.rules with the following loop:
# AUTO UDEV RULE BY Maclean Liu 2012/06/30
for i in b c d e f ;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
done
5. As root, run /sbin/start_udev. 
A complete example session:
[root@vrh6 dev]# echo "options=--whitelisted --replace-whitespace" >> /etc/scsi_id.config
[root@vrh6 dev]# for i in b c d e f ;
> do
> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
> done
[root@vrh6 dev]#
[root@vrh6 dev]# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB09cadb31-cfbea255", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB5f097069-59efb82f", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB4e1a81c0-20478bc4", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBdcce9285-b13c5a27", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB82effe1a-dbca7dff", NAME="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@vrh6 dev]#
[root@vrh6 dev]# /sbin/start_udev
Starting udev: [ OK ]
[root@vrh6 dev]# ls -l asm*
brw-rw----. 1 grid asmadmin 8, 16 Jun 30 09:34 asm-diskb
brw-rw----. 1 grid asmadmin 8, 32 Jun 30 09:34 asm-diskc
brw-rw----. 1 grid asmadmin 8, 48 Jun 30 09:34 asm-diskd
brw-rw----. 1 grid asmadmin 8, 64 Jun 30 09:34 asm-diske
brw-rw----. 1 grid asmadmin 8, 80 Jun 30 09:34 asm-diskf
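To double-check a rule on Linux 6, where udevadm replaces the old udevtest command, something like the following can be used (a sketch; the device name comes from the session above):
# show what udev knows about the device
/sbin/udevadm info --query=all --name=/dev/sdb
# dry-run the rules for the device (replaces the old udevtest)
/sbin/udevadm test /block/sdb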

    Read the article

< Previous Page | 240 241 242 243 244 245 246 247 248 249 250 251  | Next Page >