Search Results

Search found 139 results on 6 pages for 'mega venik'.

Page 5/6

  • jQuery Hover li Show div which sits outside li structure

    - by Dave_Stott
    Hi everyone. I'm currently trying to create a "mega" drop-down menu using jQuery but have encountered an issue I've not yet been able to resolve. At the moment I have the following HTML structure:

    <div id="TopNav" class="grid_16">
      <ul class="cmsListMenuUL level0" id="TopNavMenu">
        <li class="cmsListMenuLIcmsListMenuLI highlightedLI" id="TopNavMenu_Home"><a href="/"><span class="text">Home</span></a></li>
        <li class="cmsListMenuLIfirst" id="TopNavMenu_0_1"><a href="/Key-Sectors.aspx" class="cmsListMenuLink"><span class="text">Key Sectors</span></a></li>
        <li class="cmsListMenuLI" id="TopNavMenu_0_2"><a href="/Global-Brands.aspx" class="cmsListMenuLink"><span class="text">Global Brands</span></a></li>
        <li class="cmsListMenuLI" id="TopNavMenu_0_3"><a href="/News---Features.aspx" class="cmsListMenuLink"><span class="text">News &amp; Features</span></a></li>
        <li class="cmsListMenuLI" id="TopNavMenu_0_4"><a href="/Videos.aspx" class="cmsListMenuLink"><span class="text">Videos</span></a></li>
        <li class="cmsListMenuLI" id="TopNavMenu_0_5"><a href="/Events.aspx" class="cmsListMenuLink"><span class="text">Events</span></a></li>
        <li class="cmsListMenuLI" id="TopNavMenu_0_6"><a href="/Key-Cities.aspx" class="cmsListMenuLink"><span class="text">Key Cities</span></a></li>
        <li class="cmsListMenuLI" id="TopNavMenu_0_7"><a href="/Doing-Business-in-Yorkshire.aspx" class="cmsListMenuLink"><span class="text">Doing Business in Yorkshire</span></a></li>
        <li class="cmsListMenuLI" id="TopNavMenu_0_8"><a href="/How-We-Can-Help.aspx" class="cmsListMenuLink"><span class="text">How We Can Help</span></a></li>
        <li class="cmsListMenuLI" id="TopNavMenu_0_9"><a href="/Contact-Us.aspx" class="cmsListMenuLink"><span class="text">Contact Us</span></a></li>
      </ul>
    </div>
    <div class="sectorsDropped">
      <div class="floatLeft leftColumn">
        <div class="parentItem" style="border-color: #0064BE;">
          <a href="/Key-Sectors/Advanced-Engineering---Materials.aspx" class="parentItemContent">Advanced Engineering &amp; Materials</a>
          <div class="childItem"><a href="/Key-Sectors/Advanced-Engineering---Materials/Nuclear.aspx">- Nuclear</a></div>
          <div class="childItem"><a href="/Key-Sectors/Advanced-Engineering---Materials/Logistics---Infrastructure.aspx">- Logistics &amp; Infrastructure</a></div>
        </div>
        <div class="parentItem" style="border-color: #FFB611;">
          <a href="/Key-Sectors/Chemicals.aspx" class="parentItemContent">Chemicals</a>
        </div>
        <div class="parentItem" style="border-color: #B7CC0B;">
          <a href="/Key-Sectors/Environmental-Technologies.aspx" class="parentItemContent">Environmental Technologies</a>
          <div class="childItem"><a href="/Key-Sectors/Environmental-Technologies/Offshore-Wind.aspx">- Offshore Wind</a></div>
          <div class="childItem"><a href="/Key-Sectors/Environmental-Technologies/Carbon-Capture---Storage.aspx">- Carbon Capture &amp; Storage</a></div>
          <div class="childItem"><a href="/Key-Sectors/Environmental-Technologies/Tidal-Power.aspx">- Tidal Power</a></div>
          <div class="childItem"><a href="/Key-Sectors/Environmental-Technologies/Biomass.aspx">- Biomass</a></div>
        </div>
      </div>
      <div class="floatLeft rightColumn">
        <div class="parentItem" style="border-color: #AC26AA;">
          <a href="/Key-Sectors/Digital---New-Media.aspx" class="parentItemContent">Digital &amp; New Media</a>
        </div>
        <div class="parentItem" style="border-color: #e1477e;">
          <a href="/Key-Sectors/Food---Drink.aspx" class="parentItemContent">Food &amp; Drink</a>
        </div>
        <div class="parentItem" style="border-color: #00c5b5;">
          <a href="/Key-Sectors/Healthcare-Technologies.aspx" class="parentItemContent">Healthcare Technologies</a>
          <div class="childItem"><a href="/Key-Sectors/Healthcare-Technologies/Biotechnology.aspx">- Biotechnology</a></div>
          <div class="childItem"><a href="/Key-Sectors/Healthcare-Technologies/Pharmaceuticals.aspx">- Pharmaceuticals</a></div>
          <div class="childItem"><a href="/Key-Sectors/Healthcare-Technologies/Medical-Devices.aspx">- Medical Devices</a></div>
        </div>
        <div class="parentItem" style="border-color: #AC1A2F;">
          <a href="/Key-Sectors/Financial---Professional.aspx" class="parentItemContent">Financial &amp; Professional</a>
        </div>
      </div>
    </div>

    In normal circumstances the div containing the "mega" menu options would sit inside the li that fires the show/hide, but this is currently not possible: the ul of navigation links is rendered by a third-party piece of software which does not provide an equivalent of an OnItemDataBound event for me to inject the div into the item. Does anyone know of a way, using jQuery, to show the div and keep it displayed as the mouse leaves the li that originally displayed it and enters the div itself? I'm currently using the following jQuery, which displays the div correctly, but as soon as the mouse enters the div, the div disappears, because focus has left the li:

    $(document).ready(function() {
        function addMega() {
            $(".sectorsDropped").toggle("fast");
        }
        function removeMega() {
            $(".sectorsDropped").toggle("fast");
        }
        var megaConfig = {
            interval: 500,
            sensitivity: 4,
            over: addMega,
            timeout: 500,
            out: removeMega
        };
        $("#TopNavMenu_0_1").hoverIntent(megaConfig);
    });

    Thanks
    Dave

    Read the article

  • SQLAuthority News – Speaking Sessions at TechEd India – 3 Sessions – 1 Panel Discussion

    - by pinaldave
    Microsoft Tech-Ed India 2010 is considered the major technology event of the year for IT professionals and developers. The event features a comprehensive forum in which to learn, connect, explore, and evolve the technologies we use today. I recommend this event because you will learn about today's cutting-edge trends, enhancing your work profile and getting ahead of the rest. But the most important benefit of all might be the networking opportunity: you can build personal connections with various Microsoft experts and peers that will last far beyond the event itself! It also feels good to let you know that I will be speaking at this year's event. Here are the sessions that await you in this mega-forum.

    Session 1: True Lies of SQL Server – SQL Myth Buster
    Date: April 12, 2010  Time: 11:15pm – 11:45pm
    In this 30-minute demo session I will briefly demonstrate a few SQL Server myths and their resolutions, backed up with demos. This session is a must-attend for all developers and administrators coming to the event, and it is going to be a very quick yet fun session.

    Session 2: Master Data Services in Microsoft SQL Server 2008 R2
    Date: April 12, 2010  Time: 2:30pm – 3:30pm
    SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft's platform appeal. This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision making by allowing you to create, manage, and propagate changes from a single master view of your business entities. The Master Data hub, a vital MDS component, helps ensure reporting consistency across systems and delivers faster, more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise.

    Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing
    Date: April 14, 2010  Time: 5:00pm – 6:00pm
    Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use the spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines how to use the new geography data type to store geodetic spatial data and perform operations on it, use the new geometry data type to store planar spatial data and perform operations on it, take advantage of new spatial indexes for high-performance queries, use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more.

    Panel Discussion: Harness the Power of the Web – SEO and Technical Blogging
    Date: April 12, 2010  Time: 5:00pm – 6:00pm
    Here you will pick up lots of tricks and tips about SEO and technical blogging from various industry technical blogging experts.

    This event will surely be one of the most important tech conventions of 2010. TechEd is going to be a very busy time for developers and enthusiasts, since every evening there will be a fun session to attend. If any of the above topics interests you, I suggest you attend that session, as you will learn a great deal about the subject. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • 5 reasons why you should upgrade to the new iPad (3rd generation)

    - by Gopinath
    Apple released the new iPad, the 3rd generation, a couple of days ago, and it will be available in stores from March 16 onwards. It's the best tablet on the market, and for first-time buyers it's a no-brainer to choose it. But what about current iPad owners - should they upgrade their iPad 2 to the new iPad? That is the question on the lips of most iPad owners. In this post we give you five reasons to upgrade; if more than two of them convince you, go for the new iPad.

    Retina display – the best display ever made for a mobile device, a game changer
    The new iPad comes with a Retina display at a resolution of 2048 x 1536, twice the resolution of the iPad 2. Undoubtedly the iPad 3's display is the best ever made for a mobile device, and it's a game changer. With the better resolution on the iPad 3:
    - eBook reading is going to be a pleasure, with clear and crisp text
    - Watching HD movies on the iPad is going to be unbelievably good
    - Games targeting the Retina display are going to be more realistic - needless to explain the pleasure of playing such games
    - Graphic artists and photo editors get professional on-screen rendering support to create beautiful graphics

    2x faster & 2x memory – better games and more powerful apps
    The new iPad is more powerful, with 2x faster graphics and 2x more memory. Apple claims that the A5X processor of the new iPad is 2x faster than the iPad 2 and 4x faster than the best graphics chips available from other vendors. The RAM of the new iPad is upgraded to 1 GB from the 512 MB of the iPad 2. With the faster processor and more memory, apps and games are going to be blazing fast.

    4G internet – browse the web at up to 42 Mbps
    Half of iPad owners are frequent commuters who access the internet over cellular networks, and the new iPad's 4G LTE connectivity is going to be a big boon for their high data access needs. With it you can browse the web at up to 42 Mbps, which means you can watch HD video without buffering issues. The iPad 2 comes with 3G network support, and its browsing speeds are far below the new iPad's.

    5 MP camera – HD movie recording & gorgeous photography
    The iPad 2 has a 0.7-megapixel camera; the new iPad comes with a 5-megapixel camera. That is a huge boost for hobbyist photographers and videographers. With the new iPad you can shoot gorgeous photos and 1080p HD video. The iSight camera of the new iPad uses advanced optics with features like auto exposure, auto focus, and face detection for up to 10 faces.

    Amazon pays up to $300 for an old iPad 2 16 GB Wifi, and more for other models
    Did you know that you can trade in your iPad 2 16 GB Wifi for up to $300? Amazon has an excellent trade-in program for selling your used iPad 2. Depending on its condition, Amazon offers $234, $270, or $300 for the 16 GB Wifi version in Acceptable, Good, and Like New condition respectively. The higher-end iPad 2 models fetch you more money. With this great deal from Amazon, the extra money you need to spend for the new iPad is almost halved. Visit Amazon Trade In's website or read Amazon's brilliant plan to pay you crazy money for your iPad 2 for more details.

    Related: New IPad Vs. IPad 2–Side By Side Comparison Of Hardware Specification [Infographic]

    Read the article

  • Managing game state / 'what to update' within an XNA game 'screen'

    - by codinghands
    Note - having read through other GDev questions suggested while writing this one, I'm confident this isn't a dupe. Of course, it's 3am and I'm likely wrong, so please mod as such if so. I'm trying to figure out how best to manage state within my game screens - please bear with me! At the moment I'm using a heavily modified version of the fantastic game state management example on the XNA site, available here. This works perfectly for my 'Screens' - an 'IntroScreen' with some shiny logos, a 'TitleScreen' with a 'MenuScreen' stacked on top for the title and menu, a 'PlayScreen' for the actual gameplay, etc. Each screen has a bunch of sprites and an 'Update' and 'Draw', managed by a 'ScreenManager'. In addition, and as suggested in an answer to my other question here, most screens have a 'GameProcessQueue' class full of 'GameProcess'es which lets me do just about anything (animations, youbetcha!), in any order, in sequence or parallel. Why mention all this? When I talk about managing game state I'm thinking of the more complex scenarios within a 'Screen'. 'TitleScreen', 'MenuScreen' and the like are all relatively simple; 'PlayScreen' less so. How do people manage the different 'states' within the screen (or whatever you call it) that 'does' gameplay? (For me, the 'PlayScreen'.) I've thought about the following:

    - An enum of different states in the Screen, with an 'activeState' variable of that enum type, switching on the enum in the Screen's Update() loop to determine which 'sub'-Update function is called. I can see this getting hairy pretty fast though, as screens get more complex and the 'PlayScreen' becomes a behemoth mega-class.

    - A 'State' class with its own Update loop - a Screen can have any number of 'States', 1+ of which are 'active'. The Screen's update loop calls Update on all active states. States themselves know which Screen they belong to, and may even belong to a 'StateManager' which handles transitioning from one state to the next. Once a state is over, it's removed from the state list. The Screen doesn't need a bunch of GameProcessQueues; each State has its own. (A sketch of this option appears at the end of this post.)

    - Abstract Screen further to be more flexible - I can see the similarities between what I've got (game 'Screens' handled by a ScreenManager) and what I want (states within a screen, and a mechanism to manage them). However, at the moment I see 'Screens' as high-level and very distinct ('PlayScreen' with baddies != 'MenuScreen' with 4 words and event handlers), whereas my proposed 'States' are more intrinsically tied to a specific screen with complex requirements. I think.

    This is for a turn-based board game, so it's easier to define things as a discrete series of steps (IntroAnimation - P1Turn - P2Turn - P1Turn ... - GameOver - ...). Obviously with an open-world RPG things are very different, but any advice for this scenario is appreciated. If I'm just going OOP-crazy, please say so. Similarly, I'm conscious there's a huge amount on this site re: state management. But as this is my first 'serious' game after a couple of false starts, I'd like to get it right, and would rather be harassed and modded down than never ask :)
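    For what it's worth, the second option can stay very small. Here is a minimal sketch of the State idea (in JavaScript for brevity rather than the engine's C#; all names are illustrative, not from the XNA sample):

    // Each state owns its own update logic; the screen just drives them.
    function TurnState(screen) {
        this.screen = screen;
        this.active = true;   // set to false when the turn ends
    }
    TurnState.prototype.update = function (elapsed) {
        // per-state logic (and this state's own process queue) goes here
    };

    function PlayScreen() {
        this.states = [];     // 1+ states may be active at once
    }
    PlayScreen.prototype.update = function (elapsed) {
        // drop finished states, then update the rest
        this.states = this.states.filter(function (s) { return s.active; });
        this.states.forEach(function (s) { s.update(elapsed); });
    };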

    Read the article

  • The Future of Air Travel: Intelligence and Automation

    - by BobEvans
    Remember those white-knuckle flights through stormy weather, where unexpected plunges in altitude result in near-permanent relocations of major internal organs? Perhaps there's a better way, according to a recent Wall Street Journal article: "Pilots of a Honeywell International Inc. test plane stayed on their initial flight path, relying on the company's latest onboard radar technology to steer through the worst of the weather. The specially outfitted Boeing 757 barely shuddered as it gingerly skirted some of the most ferocious storm cells over Fort Walton Beach and then climbed above the rest in zero visibility."

    Or how about the multifaceted check-in process, which might not wreak havoc on liver location but nevertheless makes you wonder if you've been trapped in some sort of covert psychological-stress test? Another WSJ article, called "The Self-Service Airport," says there's reason for hope there as well: "Airlines are laying the groundwork for the next big step in the airport experience: a trip from the curb to the plane without interacting with a single airline employee. At the airport of the near future, 'your first interaction could be with a flight attendant,' said Ben Minicucci, chief operating officer of Alaska Airlines, a unit of Alaska Air Group Inc."

    And in the topsy-turvy world of air travel, it's not just the passengers who've been experiencing bumpy rides: the airlines themselves are grappling with a range of challenges - some beyond their control, some not - that make profitability increasingly elusive in spite of heavy demand for their services. A recent piece in The Economist illustrates one of the mega-challenges confronting the airline industry via a striking set of contrasting and very large numbers: while the airlines pay $7 billion per year to third-party computerized reservation services, the airlines themselves earn a collective profit of only $3 billion per year.

    In that context, the anecdotes above point unmistakably to the future that airlines must pursue if they hope to manage some of the factors outside of their control (e.g., weather) as well as all of those within their control (operating expenses, end-to-end visibility, safety, load optimization, etc.): more intelligence, more automation, more interconnectedness, and more real-time awareness of every facet of their operations. Those moves will benefit both passengers and the air carriers, says the WSJ piece on The Self-Service Airport: "Airlines say the advanced technology will quicken the airport experience for seasoned travelers - shaving a minute or two from the checked-baggage process alone - while freeing airline employees to focus on fliers with questions. 'It's more about throughput with the resources you have than getting rid of humans,' said Andrew O'Connor, director of airport solutions at Geneva-based airline IT provider SITA."

    Oracle is attempting to help airlines gain control over these challenges by blending a range of its technologies into a solution called the Oracle Airline Data Model, which suggests the following steps:
    • To retain and grow their customer base, airlines need to focus on the customer experience.
    • To personalize and differentiate the customer experience, airlines need to effectively manage their passenger data.
    • The Oracle Airline Data Model can help airlines jump-start their customer-experience initiatives by consolidating passenger data into a customer data hub that drives real-time business intelligence and strategic customer insight.
    • Oracle's Airline Data Model brings together multiple types of data that can jump-start your data-warehousing project with rich out-of-the-box functionality.
    • Oracle's Intelligent Warehouse for Airlines brings together the powerful capabilities of Oracle Exadata and the Oracle Airline Data Model to give you real-time strategic insights into passenger demand, revenues, sales channels and your flight network.

    The airline industry aside, the bullet points above offer a broad strategic outline for just about any industry, because the customer experience is becoming pre-eminent in each, and there is simply no way to deliver world-class customer experiences unless a company can capture, manage, and analyze all of the relevant data in real time. I'll leave you with two thoughts from the WSJ article about the new in-flight radar system from Honeywell: first, studies show that a single episode of serious turbulence can rack up $150,000 in additional costs for an airline - so it certainly behooves the carriers to gain the intelligence to avoid turbulence as much as possible. And second, it's back to that top-priority customer-experience thing and the value that ever-increasing levels of intelligence can deliver. As the article says: "In the cabin, reporters watched screens showing the most intense parts of the nearly 10-mile wide storm, which churned some 7,000 feet below, in vibrant red and other colors. The screens also were filled with tiny symbols depicting likely locations of lightning and hail, which can damage planes and wreak havoc on the nerves of white-knuckle flyers."

    (Bob Evans is senior vice-president, communications, for Oracle.)

    Read the article

  • Why is my jQuery jEditable edit-in-place callback not working?

    - by unknowndomain
    I am using a jQuery jEditable edit-in-place field, but when the JSON function returns and I try to return a value from my edit-in-place callback, all I get is a brief flash. You can see this here: http://clareshilland.unknowndomain.co.uk/ - hit Ctrl+L to login. Username: stackoverflow / Password: jquery. Although you can see the script in /script.js, here is the main code excerpt:

    $('#menu li a').editable(edit_menu_item, {
        cssclass: 'editable'
    });

    Here is the callback:

    function edit_menu_item(value, settings) {
        $.ajax({
            type: "POST",
            cache: false,
            url: 'ajax/menu/edit-category.php',
            dataType: 'json',
            data: { 'id': this.id, 'value': value },
            success: function (data) {
                if (data.status == 'ok') {
                    alert('ok');
                    return data.title;
                } else {
                    alert('n/ok');
                    return this.revert;
                }
            }
        });
    }

    The JSON code is in ajax/menu/edit-category.php. The edit-in-place is on the menu, which also has a jQuery sortable on it. Single click to edit, Enter to save; it stores the data but does not update it on the edit-in-place field. Please help Stack Overflow, I have been working on this for a mega age. Thanks in advance!
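    The likely culprit: jEditable expects edit_menu_item itself to return the string to display, but the returns above happen inside the asynchronous success callback, so the outer function returns undefined before the response arrives (and `this.revert` inside the callback no longer points at the edited element). A sketch of the usual workaround, making the call synchronous so the result can be returned from the outer function:

    function edit_menu_item(value, settings) {
        var result = value;       // fall back to the typed value
        var revert = this.revert; // capture before `this` changes inside the callback
        $.ajax({
            type: "POST",
            cache: false,
            async: false,         // block so the outer function can return the result
            url: 'ajax/menu/edit-category.php',
            dataType: 'json',
            data: { id: this.id, value: value },
            success: function (data) {
                result = (data.status == 'ok') ? data.title : revert;
            }
        });
        return result;            // jEditable displays this string
    }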

    Read the article

  • Segment register, IP register and memory addressing issue!

    - by Zia ur Rahman
    In the following text I ask two questions, and I also describe what I already know about each so that you can understand my thinking. Your comments on the text below are welcome.

    Detail of the first question: As we know, if we have one megabyte of memory then we need 20 bits to address it; each memory cell in a 1 MB memory has a 20-bit physical address. The IP register in the iAPX88 is 16 bits. My view is that the IP register alone cannot address this memory at all, because the memory needs a 20-bit address but IP is only 16 bits. If we had a 64 KB memory, IP could address it, because that memory needs only 16 bits to be addressed; but in the case of 1 MB it can't. Am I right or not? If not, why? Suppose the physical address of a memory location is 1100 0000 0000 0000 0101 - how can we access this location with 16 bits?

    Detail of the next question: Suppose the IP register points to a memory location, the segment register points to a memory location (the start of the segment), and the memory is 1 MB. How can we access a memory location through these two 16-bit registers? Tell me the sequence of steps by which the 20-bit memory address is formed. If your answer is that we take the segment value, shift it left by 4 bits, and then add the IP value to get the 20-bit address, then this raises another question about the address bus (which should be 20 bits wide): both the segment register and the IP register are 16 bits each, so if the address bus is 20 bits wide, does that mean the address bus is connected directly to both registers? If that's not the case, the other thing that comes to mind is that the two registers generate a 20-bit address, and there is a register that can store 20 bits, connected to both of these registers and to the address bus as well.
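    For reference, the shift-and-add in the second question can be checked against the very address given above (this is the standard 8086-family scheme; the sum is formed in a dedicated 20-bit adder inside the bus interface unit, not in a programmer-visible register, which resolves the address-bus puzzle). A tiny worked example:

    // segment:offset -> 20-bit physical address
    var segment = 0xC000;                    // 16-bit segment register value
    var offset  = 0x0005;                    // 16-bit IP value
    var physical = (segment << 4) + offset;  // 0xC0000 + 0x0005 = 0xC0005
    console.log(physical.toString(2));       // "11000000000000000101"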

    Read the article

  • How to deal with recursive dependencies between static libraries using the binutils linker?

    - by Jack Lloyd
    I'm porting an existing system from Windows to Linux. The build is structured as multiple static libraries. I ran into a linking error where a symbol (defined in libA) could not be found in an object from libB. The linker line looked like:

    g++ test_obj.o -lA -lB -o test

    The problem, of course, is that by the time the linker finds it needs the symbol from libA, it has already passed it by, and it does not rescan, so it simply errors out even though the symbol is there for the taking. My initial idea was of course to simply swap the link order (to -lB -lA) so that libA is scanned afterwards and any symbols missing from libB that are in libA are picked up. But then I found there is actually a recursive dependency between libA and libB! I'm assuming the Visual C++ linker handles this in some way (does it rescan by default?). Ways of dealing with this I've considered:

    - Use shared objects. Unfortunately this is undesirable because it requires PIC compilation (this is performance-sensitive code and losing %ebx to hold the GOT would really hurt), and shared objects aren't otherwise needed.
    - Build one mega ar of all of the objects, avoiding the problem.
    - Restructure the code to avoid the recursive dependency (which is obviously the Right Thing to do, but I'm trying to do this port with minimal changes).

    Do you have other ideas for dealing with this? Is there some way I can convince the binutils linker to rescan libraries it has already looked at when it is missing a symbol?
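    For reference, GNU ld supports exactly this kind of rescanning through library groups: archives between --start-group and --end-group are searched repeatedly until no new undefined references can be resolved. The blunter fix of simply repeating a library on the command line also works. Against the link line from the question, that looks like:

    g++ test_obj.o -Wl,--start-group -lA -lB -Wl,--end-group -o test

    or, listing the first library twice:

    g++ test_obj.o -lA -lB -lA -o test

    (Grouping has a link-time cost, which is why the manual suggests using it only when the circular dependency is real, as it is here.)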

    Read the article

  • Programming tips for writing document editors?

    - by Tesserex
    I'm asking this because I'm in the process of writing two such editors for my Mega Man engine: a tileset editor and a level editor. When I say document editor, I mean the superset application type covering things like image editors and text editors. All of these share things like toolbars, menu options, and, in the case of image editors and my apps, tool panes. We all know there's tons of advice out there for interface design in these apps, but I'm wondering about programming advice. Specifically, I'm doubting my code designs on the following points:

    - Many menu options toggle various behaviors. What's the proper way to reliably tie the checked state of the option to the status of the behavior? Sometimes it's more complicated, like options being disabled when there's no document loaded.
    - More and more consensus seems to be against using MDI, but how should I control tool panes? For example, I can't figure out how to get the panels to minimize and maximize along with the main window, like Photoshop does.
    - When tool panels are responsible for a particular part of the document, who actually owns that thing - the main window, or the panel class?
    - How do you handle communication between the tool panels and the main window? Currently mine is all event-based, but it seems like there could be a better way.

    This seems to be a common class of GUI application, but I've never seen specific pointers on code design for them. Could you please offer whatever advice or experience you have for writing them?

    Read the article

  • Problem with JavaScript arithmetic

    - by Lynn
    I have a form for my customers to add budget projections. A prominent user wants to be able to show dollar values in either dollars, kilo-dollars or mega-dollars. I'm trying to achieve this with a group of radio buttons that call the following JavaScript function, but am having problems with rounding that make the results look pretty crummy. Any advice would be much appreciated! Lynn

    function setDollars(new_mode) {
        var factor;
        var cur_mode = document.proj_form.cur_dollars.value;
        if (cur_mode == new_mode) {
            return;
        } else if ((cur_mode == 'd') && (new_mode == 'kd')) {
            factor = "0.001";
        } else if ((cur_mode == 'd') && (new_mode == 'md')) {
            factor = "0.000001";
        } else if ((cur_mode == 'kd') && (new_mode == 'd')) {
            factor = "1000";
        } else if ((cur_mode == 'kd') && (new_mode == 'md')) {
            factor = "0.001";
        } else if ((cur_mode == 'md') && (new_mode == 'kd')) {
            factor = "1000";
        } else if ((cur_mode == 'md') && (new_mode == 'd')) {
            factor = "1000000";
        }
        document.proj_form.cur_dollars.value = new_mode;
        var cur_idx = document.proj_form.cur_idx.value;
        // Adjust dollar values for projections
        for (var i = 1; i < 13; i++) {
            var myfield = eval('document.proj_form.proj_' + i);
            if (myfield.value == '') {
                myfield.value = 0;
            }
            var myval = parseFloat(myfield.value) * parseFloat(factor);
            myfield.value = myval;
            if (i < cur_idx) {
                document.getElementById("actual_" + i).innerHTML = myval;
            }
        }
    }
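    The crummy-looking results are classic binary floating point (0.1 * 3 gives 0.30000000000000004-style noise), made worse here because each mode switch multiplies the already-converted field value, so the errors accumulate. A sketch of one common fix (the helper name is hypothetical): keep every value in plain dollars, derive the displayed value per mode, and round with toFixed:

    function toDisplay(valueInDollars, mode) {
        var factor = { d: 1, kd: 0.001, md: 0.000001 }[mode];
        // Round the scaled value to 3 decimals; parseFloat drops trailing zeros.
        return parseFloat((valueInDollars * factor).toFixed(3));
    }

    // usage: the field always stores dollars; only the display is scaled
    // toDisplay(1234567, 'md') -> 1.235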

    Read the article

  • How do the sizes of standard libraries compare for different languages

    - by Roman A. Taycher
    Someone was recently raving about how great jQuery is, how it made JavaScript a pleasure, and how the whole source code was so small (and one file). I looked it up on www.ohloh.net/ and it said it was about 30,000 lines of JavaScript; when I tried curl piped to wc it said about 5,000 lines (strange discrepancy, that - maybe test suites, etc.?). I thought, well, it isn't that strange, since JavaScript from what I've heard has a lot of fun dynamic tricks, so you can probably get away with a small library. But then I thought: what about other high-level languages, the ones with large standard libraries? I wondered how big the standard libraries are for Python/Ruby/Haskell/Pharo (Smalltalk)/*ML/etc. (the libraries, not the VM internals, to the degree it's possible to separate them). Anybody know? Any details (comment/blank/code lines, test code lines, lines in the language vs. lines in FFI/bytecode) are appreciated!

    Edit: P.S. Since this started with me asking about jQuery, as a bonus could you please also list the sizes of mega-frameworks? A mega-framework provides so much that people using mega-framework X in language Y might sometimes refer to programming in XY, or even in X, rather than in Y (e.g. Qt, jQuery, etc.).

    Read the article

  • 'Bank Switching' Sprites on old NES applications

    - by Jeffrey Kern
    I'm currently writing in C# what could basically be called my own interpretation of the NES hardware for an old-school-looking game that I'm developing. I've fired up FCE and have been observing how the NES displayed and rendered graphics. In a nutshell, the NES could hold two bitmaps' worth of graphical information, each with dimensions of 128x128. These are called the PPU tables. One was for BG tiles and the other was for sprites. The data had to be in this memory for it to be drawn on-screen. Now, if a game had more graphical data than these two banks, it could write portions of this new information to these banks - overwriting what was there - at the end of each frame, and use it from the next frame onward.

    So, in old games, how did the programmers 'bank switch'? I mean, within the level design, how did they know which graphic set to load? I've noticed that Mega Man 2 bank-switches when the screen programmatically scrolls from one portion of the stage to the next. But how did they store this information in the level - which sprites to copy over into the PPU tables, and where to write them? Another example would be hitting pause in MM2. BG tiles get overwritten during pause, and then get restored when the player unpauses. How did they remember which tiles they replaced and how to restore them?

    If I were lazy, I could just make one huge static bitmap and just grab values that way. But I'm forcing myself to limit these values to create a more authentic experience. I've read the amazing guide on how M.C. Kids was made, and I'm trying to be barebones about how I program this game. It still just boggles my mind how these programmers accomplished what they did with what they had.

    EDIT: The only solution I can think of would be to hold separate tables that state what tiles should be in the PPU at what time, but I think that would be a huge memory resource that the NES wouldn't be able to handle.
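    For what it's worth, the table in the EDIT doesn't have to describe individual tiles - a few per-screen bank indices are enough, which is tiny. A sketch of the idea (all names hypothetical, and in JavaScript rather than the engine's C#):

    // Per-screen metadata: which graphics banks the PPU tables should hold.
    var stage1Screens = [
        { bgBank: 2, spriteBank: 5 },   // screen 0
        { bgBank: 2, spriteBank: 6 }    // screen 1: new enemies, same BG
    ];

    function enterScreen(ppu, screens, index) {
        var banks = screens[index];
        ppu.loadBgTable(banks.bgBank);          // overwrite the 128x128 BG table
        ppu.loadSpriteTable(banks.spriteBank);  // and the sprite table
    }

    Under this scheme, pause restore falls out for free: on unpause, call enterScreen again for the current screen, so nothing about the replaced tiles needs to be remembered.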

    Read the article

  • Multi-reader IPC solution?

    - by gct
    I'm working on a framework in C++ (just for fun for now) that lets the user write plugins which use a standard API to stream data between each other. There are going to be three basic transport mechanisms for the data: files, sockets, and some kind of IPC piping system. The system is set up so that for the non-file transports, each stream can have multiple readers; i.e. once a server socket is set up, multiple computers can connect and stream the data. I'm a little stuck on the multi-reader IPC system, though. All my plugins run in threads, so they live in the same address space, so some kind of shared-memory system would work fine. I was thinking I'd write my own circular buffer with a write pointer and read pointers chasing it around the buffer, but I have my doubts that I can achieve the same performance as something like Linux pipes. I'm curious what people would suggest for a multi-reader solution to something like this. Is the overhead for pipes or domain sockets low enough that I could just open a connection to each reader and issue separate writes to each one? This is intended for significant volumes of data (tens of mega-samples/sec), so performance is a must.

    Read the article

  • "Most popular" GROUP BY in LINQ?

    - by tags2k
    Assuming a table of tags like the Stack Overflow question tags:

    TagID (bigint), QuestionID (bigint), Tag (varchar)

    What is the most efficient way to get the 25 most used tags using LINQ? In SQL, a simple GROUP BY will do:

    SELECT Tag, COUNT(Tag) FROM Tags GROUP BY Tag

    I've written some LINQ that works:

    var groups = from t in DataContext.Tags
                 group t by t.Tag into g
                 select new { Tag = g.Key, Frequency = g.Count() };
    return groups.OrderByDescending(g => g.Frequency).Take(25);

    Like, really? Isn't this mega-verbose? The sad thing is that I'm doing this to save a massive number of queries, as my Tag objects already contain a Frequency property that would otherwise need to check back with the database for every Tag if I actually used the property. So I then parse these anonymous types back into Tag objects:

    groups.OrderByDescending(g => g.Frequency)
          .Take(25)
          .ToList()
          .ForEach(t => tags.Add(new Tag() { Tag = t.Tag, Frequency = t.Frequency }));

    I'm a LINQ newbie, and this doesn't seem right. Please show me how it's really done.

    Read the article

  • Problem finding the difference in days between two dates

    - by James
    I have been using a tidy little routine that I found here to calculate the difference in days between two dates in AS3. I am getting some strange results and I am wondering if any of you inter-codal-mega-lords can shed some light. Why is Q1 of 2010 coming up one day short, when in all other cases the routine is performing fine? Many thanks in advance to anyone who can help!

    function countDays(startDate:Date, endDate:Date):int {
        var oneDay:int = 24*60*60*1000; // hours*minutes*seconds*milliseconds
        var diffDays:int = Math.abs((startDate.getTime() - endDate.getTime())/(oneDay));
        return diffDays;
    }

    countDays(new Date(2010, 00, 01), new Date(2011, 00, 01)); // returns 365, which is correct
    countDays(new Date(2010, 00, 01), new Date(2010, 03, 01)); // returns 89, which is 1 day short
    countDays(new Date(2010, 03, 01), new Date(2010, 06, 01)); // returns 91, which is correct
    countDays(new Date(2010, 06, 01), new Date(2010, 09, 01)); // returns 92, which is correct
    countDays(new Date(2010, 09, 01), new Date(2011, 00, 01)); // returns 92, which is correct
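    This is almost certainly daylight saving time rather than a bug in the date math: US DST began on March 14 in 2010, so January 1 to April 1 in local time is 89 days and 23 hours, and the implicit int conversion truncates 89.958... down to 89. (The October-to-January range gains an hour instead, and truncation still lands on 92; the full year contains both transitions and comes out exact.) A sketch of the usual fix, in plain JavaScript - the AS3 version is identical apart from the type annotations:

    function countDays(startDate, endDate) {
        var oneDay = 24 * 60 * 60 * 1000;
        // Math.round absorbs the +/- 1 hour that a DST transition adds or
        // removes, where int truncation turned 89.958... into 89.
        return Math.round(Math.abs(endDate.getTime() - startDate.getTime()) / oneDay);
    }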

    Read the article

  • Is there anything on a local network or desktop environment that could affect JScript execution?

    - by uku
    I know this sounds odd. The JS on my project functions perfectly, except when the web site is accessed using computers at one specific company. To make things even more difficult, the JS fails only about 50% of the time when run from that company. The JS failure occurs with Firefox, Chrome, and IE. I have tested this myself using FF and Chrome on a thumb drive. The browsers on my thumb drive always display my project site perfectly, except when run from a computer on said company's network, where they fail at the same rate as the installed browsers. My JS is using jQuery and making some Ajax calls, and the Ajax calls are where the failure occurs. To diagnose the problem I created a logging function for my Ajax calls and recorded success and failure. Over a one-month period, there were only a handful of failures (about 1%) from all access points other than this company. Oddly enough, the Ajax calls in the logging function are not failing. There is nothing exotic on the machines there - just Win XP SP3. I have never noticed any other unusual behavior from their network. The company is a division of a mega ISP and is on their corporate network. Any other suggestions for troubleshooting would be welcome.
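    Given that only the content Ajax calls fail while the logging calls get through, a corporate proxy or content filter interfering with certain XHR responses is a plausible suspect. One way to narrow it down (a sketch - the /log endpoint and parameter names are hypothetical stand-ins for the existing logging function) is to record jQuery's error details rather than just success or failure:

    $.ajax({
        url: "/api/data",        // whichever call is failing
        timeout: 10000,
        success: function (data) { /* normal path */ },
        error: function (xhr, textStatus, errorThrown) {
            // textStatus is "timeout", "error", "abort" or "parsererror";
            // xhr.status is the HTTP code the network actually returned.
            new Image().src = "/log?s=" + textStatus + "&c=" + xhr.status;
        }
    });

    A pattern of "parsererror" results would point at mangled responses; a pattern of proxy status codes (407, 502) would point at the network itself.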

    Read the article

  • Why do many applications close after opening a document or doing specific actions?

    - by Mohsen Farjami
    I have some encrypted PDF files that are fine in themselves; on my previous Windows install I could open them easily with Adobe Reader 9.2 and other PDF readers. But now I can only open non-encrypted PDF files, and one encrypted file, with Adobe Reader; every time I open almost any other encrypted PDF, it closes itself. Also, when I tried to search a folder for a keyword with Foxit Reader, it closed once. This is not specific to Adobe Reader, because I have the same problem with Word 2007: when I open a document, sometimes it closes instantly, sometimes it closes after a few seconds, and sometimes it is stable. My Windows install is fresh - I installed it a few days ago. I have ESET Smart Security 5.2, and I updated it today. OS: XP Pro SP3, RAM: 3 GB, CPU: 2 GHz, HDD: 320 GB.

    My installed applications: Adobe AIR, Adobe Flash Player 11 ActiveX, Adobe Flash Player 11 Plugin, Adobe Photoshop CS4, Adobe Reader 9.2, Atheros Wireless LAN Client Adapter, Babylon, Bluetooth Stack for Windows by Toshiba, CCleaner, Conexant HD Audio, Dell Touchpad, ESET Smart Security, Farsi (101) Custom, Foxit Reader, Framing Studio 3.27, Google Chrome, Hard Disk Sentinel PRO, HDAUDIO Soft Data Fax Modem with SmartCP, Intel(R) Graphics Media Accelerator Driver, IrfanView (remove only), Java(TM) 6 Update 18, K-Lite Mega Codec Pack 8.8.0, Microsoft .NET Framework 2.0 Service Pack 1, Microsoft .NET Framework 3.0 Service Pack 1, Microsoft .NET Framework 3.5, Microsoft Data Access Components KB870669, Microsoft Office 2007 Primary Interop Assemblies, Microsoft Office Enterprise 2007, Microsoft User-Mode Driver Framework Feature Pack 1.0.0 (Pre-Release 5348), Mozilla Firefox 7.0.1 (x86 en-US), Notepad++, Office Tab FreeEdition 8.50, ParsQuran, PerfectDisk 12 Professional, Registry First Aid, RICOH R5C83x/84x Flash Media Controller Driver Ver.3.54.06, Sahar Money Manager 2.5, Stickies 7.1d, The KMPlayer (remove only), TurboLaunch 5.1.2, Unlocker 1.9.1, USB Safely Remove 4.2, Virastyar, Visual Studio 2005 Tools for Office Second Edition Runtime, Winamp, Windows Internet Explorer 8, Windows Media Player 11.0.5358.4826, Windows XP Service Pack 3, WinRAR 4.11 (32-bit), WorkPause 1.2, Z Dictionary.

    My startup applications: WorkPause, USB Safely Remove, TurboLaunch, SunJavaUpdateSched, Stickies, rfagent, Persistence, ParsQuran Daily Verse, ITSecMng, IgfxTray, HotKeysCmds, Hard Disk Sentinel, egui, disable shift+delete, CTFMON.EXE, Bluetooth Manager, Babylon Client, Apoint, AdobeCS4ServiceManager, Adobe Reader Speed Launcher, Adobe ARM.

    What should I do to solve this? If you recommend installing Windows again, what guarantees that it won't happen again?

    Read the article

  • Memory Efficient Windows SOA Server

    - by Antony Reynolds
    Installing a Memory Efficient SOA Suite 11.1.1.6 on Windows Server

    Well, 11.1.1.6 is now available for download, so I thought I would build a Windows Server environment to run it. I will minimize the memory footprint of the installation by putting all functionality into the Admin Server of the SOA Suite domain.

    Required Software
    - 64-bit JDK
    - SOA Suite (if you want 64-bit, choose "Generic" rather than "Microsoft Windows 32bit JVM" or "Linux 32bit JVM")

    This has links to all the required software. If you choose "Generic" then the Repository Creation Utility link does not show; you still need this, so change the platform to "Microsoft Windows 32bit JVM" or "Linux 32bit JVM" to get the software. Similarly, if you need a database then you need to change the platform to get the link to XE for Windows or Linux. If possible, I recommend installing a 64-bit JDK, as this allows you to assign more memory to individual JVMs. Windows XE will work, but it is better if you can use a full Oracle database because of the limitations on XE that sometimes cause it to run out of space with large or multiple SOA deployments.

    Installation Steps
    The following flow chart outlines the steps required in installing and configuring SOA Suite. The steps in the diagram are explained below.

    64-bit? Is a 64-bit installation required? The Windows and Linux installers will install 32-bit versions of the Sun JDK and JRockit. A separate JDK must be installed for 64-bit.

    Install 64-bit JDK: The 64-bit JDK can be either HotSpot or JRockit. You can choose either JDK 1.7 or 1.6.

    Install WebLogic: If you are using 64-bit, install WebLogic using "java -jar wls1036_generic.jar". Make sure you include Coherence in the installation; the easiest way to do this is to accept the "Typical" installation.

    SOA Suite Required? If you are not installing SOA Suite, you can jump straight ahead and create a WebLogic domain.

    Install SOA Suite: Run the SOA Suite installer and point it at the existing Middleware Home created for WebLogic. Note that to run the SOA installer on Windows the user must have admin privileges. I also found that on Windows Server 2008 R2 I had to start the installer from a command prompt with administrative privileges; granting it privileges when it ran caused it to ignore the jreLoc parameter.

    Database Available? Do you have access to a database into which you can install the SOA schema? SOA Suite requires access to an Oracle database (it is supported on other databases, but I would always use an Oracle database).

    Install Database: I use an 11gR2 Oracle database to avoid XE limitations. Make sure that you set the database character set to Unicode (AL32UTF8). I also disabled the new security settings because they get in the way for a developer database. Don't forget to check that the number of processes is at least 150 and the number of sessions is not set, or is set to at least 200 (in the DB init parameters).

    Run RCU: The SOA Suite database schemas are created by running the Repository Creation Utility. Install the "SOA and BPM Infrastructure" component to support SOA Suite. If you keep the schema prefix as "DEV", the config wizard is easier to complete.

    Run Config Wizard: The Config wizard creates the domain which hosts the WebLogic server instances. To get a minimum-footprint SOA installation, choose the "Oracle Enterprise Manager" and "Oracle SOA Suite for developers" products. All other required products will be automatically selected.

    The "for developers" installs target the appropriate components at the AdminServer rather than creating a separate managed server to house them. This reduces the number of JVMs required to run the system and hence the amount of memory required. It is not suitable for anything other than a developer environment, as it mixes the admin and runtime functions together in a single server. It also takes a long time to load all the required modules, making start-up a slow process. If it exists, I would recommend running the config wizard found in the "oracle_common/common/bin" directory under the middleware home. This should have access to all the templates, including SOA.

    If you also want to run BAM in the same JVM as everything else, you need to "Select Optional Configuration" for "Managed Servers, Clusters and Machines". To target BAM at the AdminServer, delete the "bam_server1" managed server that is created by default. This will result in BAM being targeted at the AdminServer.

    Installation Issues
    I had a few problems when I came to test everything in my mega-JVM. The following applications were not targeted, so I needed to target them at the AdminServer:
    - b2bui
    - composer
    - Healthcare UI
    - FMW Welcome Page Application (11.1.0.0.0)

    How Memory Efficient is It?
    On a Windows Server 2008 R2 machine running under VirtualBox I was able to bring up both the 11gR2 database and SOA/BPM/BAM in 3 GB of memory. I allocated a minimum of 512 MB to the PermGen and a minimum of 1.5 GB for the heap. The settings from setSOADomainEnv are shown below:

    set DEFAULT_MEM_ARGS=-Xms1536m -Xmx2048m
    set PORT_MEM_ARGS=-Xms1536m -Xmx2048m
    set DEFAULT_MEM_ARGS=%DEFAULT_MEM_ARGS% -XX:PermSize=512m -XX:MaxPermSize=768m
    set PORT_MEM_ARGS=%PORT_MEM_ARGS% -XX:PermSize=512m -XX:MaxPermSize=768m

    I arrived at these numbers by monitoring JVM memory usage in JConsole. Task Manager showed total system memory usage at 2.9 GB - just below the 3 GB I allocated to the VM. Performance is not stellar, but it runs, and I could run JDeveloper alongside it on my 8 GB laptop, so in that sense it was a result!

    Read the article

  • Microsoft BUILD 2013 Day 1 – Keynote

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/microsoft-build-2013-day-1ndashkeynote.aspx

    This one is going to be a little long because the keynote was jam-packed, so bear with me. The keynote for the first day of BUILD 2013 was kicked off by Steve Ballmer. He made it very clear that Microsoft's focus is on accelerating its time to market with products and product updates; his quote was that "rapid release" is the new norm. He continued by showing off several new Lumias that have been buzzing around the internet for a while, and announced that Sprint will now be carrying the HTC 8XT and Samsung ATIV.

    Ballmer is known for repeating words or phrases for effect. This time it was "Rapid release, rapid release" and "Touch, touch, touch, touch, touch, …". This was fun, but even more fun was when he announced that all attendees would receive an Acer Iconia 8" tablet. SCORE!

    The next subject Ballmer focused on was new apps. The three new ones were Flipboard, Facebook and NFL Fantasy Football. I liked the first two because these are ones that people coming from other platforms are missing. The NFL app is great just because it targets a demographic that can be fanatical. If these types of apps keep coming, the missing-app argument goes away.

    While many Negative Nancys are describing Windows 8.1 as Windows 180, Steve Ballmer chose to call it a "refined blend", as in a coffee that has been improved with a new mix. This includes more multi-tasking options and leveraging Bing throughout the entire ecosystem. He ended this first section by explaining that this will also bring more Bing development opportunities to the community.

    Steve Ballmer was followed by Julie Larson-Green, who spent her time on stage selling us on Windows 8 all over again - from my point of view, something I would not have thought was needed until I had listened to some other attendees who had a number of concerns and complaints. She showed a number of new gestures that will come with Windows 8.1, and while they were cool I was left wondering if they really improved the experience. I guess only time will tell. I did like the fact that the UI implementation for bringing up "All Apps" now mirrors that of Windows Phone; the consistency is a big step forward that I hope to see continue. The cool factor went up from there as she swiped content from a desktop (mega-tablet) to the Xbox One. This seamless experience, I believe, is what is really needed for any future platform to be relevant.

    I was much more enthused by the presentation of Antoine Leblond, who humbled us by letting us know that there are 5,000 new APIs. How that can be, or how anyone would ever use all of them, is another question. His announcement was that the Visual Studio 2013 preview would be available today, along with the Windows 8.1 bits. One of the features of VS2013 that he demonstrated is the power consumption profiler. With battery life being a key factor for consumer devices, this is a welcome addition. He didn't limit his presentation to VS2013 features though. He showed how the Store has been redesigned to enable better search and discoverability of apps, and how Windows 8.1 can automatically scale to multiple screen sizes depending on the resolution of the device. The last feature he demoed was the real-time video streaming API, which he made sure we understood by attaching a Surface to a little robot. Oh, but there was one more thing.

    Antoine and Julie announced that all attendees would also be getting Surface Pros. BONUS!

    How much more could there be? Gurdeep Singh Pall was about to pile on. He introduced us to Bing as a platform (BaaP?). He said that if they (Microsoft) could do something good with an API, third-party developers could do something dynamite, and he showed us some of the tools they had produced. These included natural-user-interface improvements such as voice commands that looked to put Siri to shame. Add to that 3D, OCR and translation capabilities, and the future looks to be full of opportunities.

    Ballmer then came out to show us one last thing. Project Spark is a game design environment that will be available for Windows 8.1, Xbox 360 and Xbox One. All I can say is that if my kids get their hands on this, they are going to be able to learn some of what dad does in a much more enjoyable way.

    At the end of it all I was both exhausted and energized by what I saw. What could they have possibly left for the day 2 keynote? I hear it will feature Scott Hanselman. If that is right, we are in for a treat. See you there.

    del.icio.us Tags: BUILD 2013, Windows 8.1, Windows Phone, XAML, Keynote, Bing, Visual Studio 2013, Project Spark

    Read the article

  • Namespaces are obsolete

    - by Bertrand Le Roy
    To those of us who have been around for a while, namespaces have been part of the landscape. One could even say that they have been defining the large-scale features of the landscape in question. However, something happened fairly recently that I think makes this venerable structure obsolete. Before I explain this development and why it's a superior concept to namespaces, let me recapitulate what namespaces are and why they've been so good to us over the years…

    Namespaces are used for a few different things:

    Scope: a namespace delimits the portion of code where a name (for a class, sub-namespace, etc.) has the specified meaning. Namespaces are usually the highest-level scoping structures in a software package.

    Collision prevention: name collisions are a universal problem. Some systems, such as jQuery, wave it away, but the problem remains. Namespaces provide a reasonable approach to global uniqueness (and in some implementations, such as XML, enforce it). In .NET, there are ways to relocate a namespace to avoid those rare collision cases.

    Hierarchy: programmers like neat little boxes, and especially boxes within boxes within boxes. For some reason. Regular human beings, on the other hand, tend to think linearly, which is why the Windows Explorer, for example, has tried in a few different ways to flatten the file system hierarchy for the user.

    1 is clearly useful because we need to protect our code from bleeding effects from the rest of the application (and vice versa). A language with only global constructs may be what some of us started programming on, but it's not desirable in any way today. 2 may not always be reasonably worth the trouble (jQuery is doing fine with its global plug-in namespace), but we still need it in many cases. One should note, however, that globally unique names are not the only possible implementation. In fact, they are a rather extreme solution. What we really care about is collision prevention within our application. What happens outside is irrelevant. 3 is, more than anything, an aesthetic choice. A common convention has been to encode the whole pedigree of the code into the namespace. Come to think of it, we never think we need to import "Microsoft.SqlServer.Management.Smo.Agent", and that would be very hard to remember. What we want to do is bring nHibernate into our app. And this is precisely what you'll do with modern package managers and module loaders.

    I want to take the specific example of RequireJS, which is commonly used with Node. Here is how you import a module with RequireJS:

    var http = require("http");

    This is of course importing an HTTP stack module into the code. There is no noise here. Let's break this down. Scope (1) is provided by the one scoping mechanism in JavaScript: the closure surrounding the module's code. Whatever scoping mechanism is provided by the language would be fine here.
    Collision prevention (2) is very elegantly handled. Whereas relocating is an afterthought, and an exceptional measure, with namespaces, it is here on the front line. You always relocate, using an extremely familiar pattern: variable assignment. We are very much used to managing our local variable names, and any possible collision will get solved very easily by picking a different name. Wait a minute, I hear some of you say. This is only taking care of collisions on the client side, on the left of that assignment. What if I have two libraries with the name "http"? Well, you can better qualify the path to the module, which is what the require parameter really is. As for hierarchical organization, you don't really want that, do you?

    RequireJS' module pattern does elegantly cover the bases that namespaces used to cover, but it also promotes additional good practices. First, it promotes usage of self-contained, single-responsibility units of code through the closure-based, stricter scoping mechanism. Namespaces are somewhat more porous, as using/import statements can be used bi-directionally, which leads us to my second point… Sane dependency graphs are easier to achieve and sustain with such a structure. With namespaces, it is easy to construct dependency cycles (that's bad, mmkay?). With this pattern, the equivalent would be to build mega-components, which are an easier problem to spot than a decay into inter-dependent namespaces, for which you need specialized tools.

    I really like this pattern very much, and I would like to see more environments implement it. One could argue that dependency injection has some commonalities with this, for example. What do you think? This is the half-baked result of some morning shower reflections, and I'd love to read your thoughts about it. What am I missing?
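    To make the relocation point concrete, a tiny sketch (the local module path is a hypothetical example): two modules that would collide under a single global name simply get different local names at the point of assignment:

    // Two "http" implementations, no collision - the local name decides.
    var nodeHttp = require("http");        // the built-in HTTP stack
    var myHttp   = require("./lib/http");  // a hypothetical local module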

    Read the article

  • 7-Eleven Improves the Digital Guest Experience With 10-Minute Application Provisioning

    - by MichaelM-Oracle
    By Vishal Mehra - Director, Cloud Computing, Oracle Consulting

    Making the Cloud Journey Matter
    There's much more to cloud computing than cutting costs and closing data centers. In fact, cloud computing is fast becoming the engine for innovation and productivity in the digital age. Oracle Consulting Services contributes to our customers' cloud journey by accelerating application provisioning and rapidly deploying enterprise solutions. By blending flexibility with standardization, our Middleware as a Service (MWaaS) offering is ensuring the success of many cloud initiatives.

    10-Minute Application Provisioning Times at 7-Eleven
    As a case in point, 7-Eleven recently highlighted the scope, scale, and results of a cloud-powered environment. The world's largest convenience store chain is rolling out a Digital Guest Experience (DGE) program across 8,500 stores in the U.S. and Canada. Every day, 7-Eleven connects with tens of millions of customers through point-of-sale terminals, web sites, and mobile apps. Promoting customer loyalty, targeting promotions, downloading digital coupons, and accepting digital payments are all part of the roadmap for a comprehensive and rewarding customer experience. And what about the time required for deploying successive versions of this mission-critical solution? Ron Clanton, 7-Eleven's DGE Program Manager, Information Technology, reported at Oracle OpenWorld: "We are now able to provision new environments in less than 10 minutes. This includes the complete SOA Suite on Exalogic, and Enterprise Manager managing both the SOA Suite, Exalogic, and our Exadata databases."

    OCS understands the complex nature of innovative solutions and has the processes and expertise to help clients like 7-Eleven rapidly develop technology that enhances the customer experience with little more than the click of a button. OCS understood that the 7-Eleven roadmap required careful planning, agile development, and a cloud-capable environment to move fast and perform at enterprise scale.

    Business Agility
    Today's business-savvy technology leaders face competing priorities as they confront the digital disruptions of the mobile revolution and next-generation enterprise applications. To support an innovation agenda, IT is required to balance competing priorities between development and operations groups. Standardization and consolidation of computing resources are the keys to success. With our operational and technical expertise promoting business agility, Oracle Consulting's deep Middleware as a Service experience can make a significant difference to our clients by empowering enterprise IT organizations with the computing environment they seek to keep up with the pace of change that digitally driven business units expect. Depending on the needs of the organization, this environment runs within a private, public, or hybrid cloud infrastructure. Through on-demand access to a shared pool of configurable computing resources, IT delivers the standard tools and methods for developing, integrating, deploying, and scaling next-generation applications. Gold profiles of predefined configurations eliminate the version mismatches among databases, application servers, and SOA suite components, delivered both by Oracle and other enterprise ISVs. These computing resources are well defined in business terms, enabling users to select what they need from a service catalog.
Striking the Balance between Development and Operations As a result, development groups have the flexibility to choose among a menu of available services with descriptions of standard business functions, service level guarantees, and costs. Faced with the consumerization of enterprise IT, they can deliver the innovative customer experiences that seamlessly integrate with underlying enterprise applications and services. This cloud-powered development and testing environment accelerates release cycles to ensure agile development and rapid deployments. At the same time, the operations group is relying on certified stacks and frameworks, tuned to predefined environments and patterns. Operators can maintain a high level of security, and continue best practices for applications/systems monitoring and management. Moreover, faced with the challenges of delivering on service level agreements (SLAs) with the business units, operators can ensure performance, scalability, and reliability of the infrastructure. The elasticity of a cloud-computing environment – the ability to rapidly add virtual machines and storage in response to computing demands -- makes a difference for hardware utilization and efficiency. Contending with Continuous Change What does it take to succeed on the promise of the cloud? As the engine for innovation and productivity in the digital age, IT must face not only the technical transformations but also the organizational challenges of the cloud. Standardizing key technologies, resources, and services through cloud computing is only one part of the cloud journey. Managing relationships among multiple department and projects over time – developing the management, governance, and monitoring capabilities within IT – is an often unmentioned but all too important second part. In fact, IT must have the organizational agility to contend with continuous change. This is where a skilled consulting services partner can play a pivotal role as a trusted advisor in the successful adoption of cloud solutions. With a lifecycle services approach to delivering innovative business solutions, Oracle Consulting Services has expertise and a portfolio of services to help enterprise customers succeed on their cloud journeys as well as other converging mega trends .

    Read the article

  • 4 Ways Your Brand Can Jump From the Edge of Space

    - by Mike Stiles
    Can your brand’s social media content captivate the world and make it hold its collective breath? Can you put something on the screen that’s so compelling your audience can’t look away? Will they want to make sure their friends see it so they can talk about it? If not, you’re probably not with Red Bull.

I was impressed with Red Bull’s approach to social content even before Felix Baumgartner’s stunning skydive from the edge of space. And then they did this. According to Visible Measures, videos of the jump scored 50 million views in 4 days. 1,700 clips were generated from both official and organic sources. The live stream was the most watched YouTube stream of all time (8 million concurrent viewers). The 2nd most watched live stream was… Felix’s first attempt on Oct. 9.

Are you ready to compete with that? I ask that question because some brands are still out there tying themselves up in knots about whether or not they should tweet. The public’s time and attention are scarce commodities, commodities they value greatly. The competition among brands for that time and attention is intense and going up like Felix’s capsule. If you still view your press releases as “content,” you won’t even be counted as being among the competition. Here are 4 lessons learned from Red Bull’s big leap:

1. They have a total understanding of their target market and audience. Not only do they have an understanding of it, they do something about it. They act on it. They fill the majority of their thoughts with what the audience wants. They hunger for wild applause from that audience. They want to do things that embrace the audience’s lifestyle and immerse themselves in it so the target will identify the brand as “one of them.” Takeaway: BE your target market.

2. They deliver content that strikes the audience right where they emotionally live. If you want your content to have impact, you have to make your audience’s heart race, or make them tear up, or make them laugh. Label them “data points” all you want, but humans are emotional creatures. No message connects that’s not carried in on an emotion. Takeaway: You’re on the inside. If your content doesn’t make you say “wow,” it’s unlikely it will register with fans.

3. They put aside old-school marketing and don’t let their content be degraded into a commercial. Their execs seem to understand the value in keeping a lid on the hard sell. So many brands just can’t bring themselves to disconnect advertising and social content. The result is that otherwise decent content gets contaminated with a desperation the viewer can smell a mile away. Think the Baumgartner skydive didn’t do Red Bull any good since he wasn’t drinking one on the way down while singing a jingle? Analysis company Taykey discovered that at the peak of the skydive buzz, about 1% of all online conversation was about the jump. Mentions of Red Bull constituted 1/3 of 1% of all Internet activity. Views of other Red Bull videos also shot up. Takeaway: Chill out with the ads. Your brand will get full credit for entertaining/informing fans in a relevant way, provided you actually do it.

4. They don’t hesitate to ask, “What can we do next?” Most corporate cultures are a virtual training facility for “we can’t do that.” Few are encouraged to innovate or think big, if they think at all. Thinking big involves faith, and work. It means freedom, and letting employees run a little wild with their ideas. There will always be the opportunity to let fear of everything that moves creep in and kill grand visions dead in their tracks. Experimenting must be allowed. Failure must be allowed. Red Bull didn’t think big. They thought mega. They tried to outdo themselves. Felix could have jumped from halfway up, thinking, “This is still relatively high up. Good enough.” But that wouldn’t have left us breathless. Takeaway: Go for it. Jump.

In putting up social properties and gathering fans of your brand, you’ve basically invited people to a party. A good host doesn’t just set out warm beer and stale chips because that’s inexpensive and easy. Be on the lookout for ways to make your guests walk away saying, “That was epic.”

    Read the article

  • dojo layout tutorial for version 1.7 doesn't work for 1.7.2

    - by Sheena
    This is sort of a continuation of dojo1.7 layout acting screwy. So I made some working widgets and tested them out. I then tried altering my work using the tutorial at http://dojotoolkit.org/documentation/tutorials/1.7/dijit_layout/ to make the layout nice. After failing at that in many interesting ways (thus my last question), I started on a new path. My plan is now to implement the layout tutorial example and then stick in my widgets. For some reason even following the tutorial won't work... everything loads, then disappears, and I'm left with a blank browser window. Any ideas? It just struck me that it could be a browser compatibility issue; I'm working on Firefox 13.0.1. As far as I know Dojo is supposed to be compatible with this... anyway, have some code:

HTML:

<body class="claro">
  <div id="appLayout" class="demoLayout" data-dojo-type="dijit.layout.BorderContainer" data-dojo-props="design: 'headline'">
    <div class="centerPanel" data-dojo-type="dijit.layout.ContentPane" data-dojo-props="region: 'center'">
      <div>
        <h4>Group 1 Content</h4>
        <p>stuff</p>
      </div>
      <div>
        <h4>Group 2 Content</h4>
      </div>
      <div>
        <h4>Group 3 Content</h4>
      </div>
    </div>
    <div class="edgePanel" data-dojo-type="dijit.layout.ContentPane" data-dojo-props="region: 'top'">
      Header content (top)
    </div>
    <div id="leftCol" class="edgePanel" data-dojo-type="dijit.layout.ContentPane" data-dojo-props="region: 'left', splitter: true">
      Sidebar content (left)
    </div>
  </div>
</body>

Dojo configuration:

var dojoConfig = {
    baseUrl: "${request.static_url('mega:static/js')}", // this is in a mako template
    tlmSiblingOfDojo: false,
    packages: [
        { name: "dojo", location: "libs/dojo" },
        { name: "dijit", location: "libs/dijit" },
        { name: "dojox", location: "libs/dojox" }
    ],
    parseOnLoad: true,
    has: {
        "dojo-firebug": true,
        "dojo-debug-messages": true
    },
    async: true
};

Other JS stuff:

require(["dijit/layout/BorderContainer", "dijit/layout/TabContainer",
         "dijit/layout/ContentPane", "dojo/parser"]);

CSS:

html, body {
    height: 100%;
    margin: 0;
    overflow: hidden;
    padding: 0;
}
#appLayout {
    height: 100%;
}
#leftCol {
    width: 14em;
}

    Read the article

  • Rspec and Rails 3 - Problem Validating Nested Attribute Collection Size

    - by MunkiPhD
    When I create my RSpec tests, validation keeps returning false instead of true for the following tests. I've tried everything, and the following is the measly code that I have now - so if it's waaaaay wrong, that's why.

class Master < ActiveRecord::Base
  attr_accessible :name, :specific_size

  # Associations ----------------------
  has_many :line_items
  accepts_nested_attributes_for :line_items, :allow_destroy => true,
    :reject_if => lambda { |a| a[:item_id].blank? }

  # Validations -----------------------
  validates :name, :presence => true, :length => {:minimum => 3, :maximum => 30}
  validates :specific_size, :presence => true, :length => {:minimum => 4, :maximum => 30}
  validate :verify_items_count

  def verify_items_count
    if self.line_items.size < 2
      errors.add(:base, "Not enough items to create a master")
    end
  end
end

And here is the items model:

class LineItem < ActiveRecord::Base
  attr_accessible :specific_size, :other_item_type_id

  # Validations --------------------
  validates :other_item_type_id, :presence => true
  validates :master_id, :presence => true
  validates :specific_size, :presence => true

  # Associations ---------------------
  belongs_to :other_item_type
  belongs_to :master
end

The RSpec tests:

before(:each) do
  @master_lines = []
  @master_lines << LineItem.new(:other_item_type_id => 1, :master_id => 2, :specific_size => 1)
  @master_lines << LineItem.new(:other_item_type_id => 2, :master_id => 2, :specific_size => 1)
  @attr = {:name => "Some Master", :specific_size => "1 giga"}
end

it "should create a new instance given a valid name and specific size" do
  @master = Master.create(@attr)
  line_item_one = @master.line_items.build(:other_item_type_id => 1, :specific_size => 1)
  line_item_two = @master.line_items.build(:other_item_type_id => 2, :specific_size => 2)
  @master.line_items.size === 2
  @master.should be_valid
end

it "should have at least two items to be valid" do
  master = Master.new(:name => "test name", :specific_size => "1 mega")
  master_item_one = LineItem.new(:other_item_type_id => 1, :specific_size => 2)
  master_item_two = LineItem.new(:other_item_type_id => 2, :specific_size => 1)
  master.line_items << master_item_one
  master.should_not be_valid
  master.line_items << master_item_two
  master.line_items.size.should === 2
  master.should be_valid
end

I'm very new to RSpec and Rails, and I've been failing at this for the past couple of hours. Thanks in advance for any help.

    Read the article

  • Wear and tear on server hard drive from filesystem polling by PHP script

    - by jackie
    So I'm working on a discussion platform, and various clients will visit http://host/thread.php, which will render the discussion thread to date in addition to a form to submit a new post. When a new post is submitted, I would like it to appear in near-real-time on all of the other clients with browser windows open. One of the constraints of my script is that it may not use a DBMS and must stay in the filesystem. Additionally, I can't use any PECL/PEAR extensions like inotify or anything like that for IPC.

The flow will look like this:

1. Client A requests thread.php. The thread is so far empty, but the page nonetheless opens a Server-Side Event connection to eventPusher.php.
2. Client B does the same.
3. Client A fills out a post in the form and submits (POSTs) it to subHandler.php.
4. ??? (subHandler stores the new submission into the main thread storefile, which gets read when a fresh, new client requests thread.php, and additionally somehow signals to the continually-running eventPusher event source that a new comment was posted and that it should echo the event JSON to the client. How exactly it will send this signal I'm not yet sure of, but there are a few options I've thought of -- this is the crux of the question, so see below for more clarification.)
5. eventPusher.php happily pushes the new event, and it shows up soon after it was originally submitted on the screens of all clients who have the page open.

Now for the #4 missing-link mystery-step, I see a few problems. Either way, eventPusher is going to be doing a while loop of some sort -- it's going to be polling something, I think that much is clear. (If that's a bad assumption, please do let me know.)

The simplest way would be for subHandler to get invoked on the form submission, write the post to the main store as well as to newComments.xml, then exit without doing anything else. eventPusher then checks newComments.xml every X seconds (by the way, what would be a reasonable time interval here?) and, if it finds something, emits an event to the client. My fear with this is that the server's hard drive will have to constantly spin up. Maybe this isn't the case; perhaps the file would just get cached in RAM and the Linux kernel would take care of this transparently, such that filesystem access doesn't actually engage the device because the kernel knows that that particular file hasn't changed since the last read.

* Idea #2: I have no idea how to go about this, but perhaps there is a variable scope that gets stored in general RAM on the system and can be read by any process. Like if we mega-exported a bash variable so that $new_post is normally false but gets toggled to true by subHandler, and then back to false once it's pushed to the client. I doubt there's such a variable scope in PHP directly, but I struggle with the concept of variable scope; I just can't seem to understand it no matter what I read on it.

* Idea #3: eventPusher queries ps in its while loop for another instance of itself. If there's no other eventPusher active, then it's highly unlikely that new comments are being submitted. It's okay if this only works >=90% of the time; it doesn't need to be completely foolproof.

* Idea #4: eventPusher queries dmesg to see if that file's been written to recently.

So to sum everything up, I need inter-PHP-script communication in near-real-time that will work on a standard mod_php shared hosting setup without any elevated privileges, PHP add-on modules, or other system adjustments that can't be done from the PHP script itself at runtime. Without spinning up the drive more than a few times. No SQL servers either. Apologies if my English isn't the best; I'm still trying to improve it.

    Read the article

< Previous Page | 1 2 3 4 5 6  | Next Page >