Search Results

Search found 18926 results on 758 pages for 'systems programming'.

  • SpaceX’s Falcon 9 Launch Success And Reusable Rockets Test Partially Successful

    - by Gopinath
    Elon Musk’s SpaceX is closing in on the dream of developing reusable rockets, and likely in a year or two space launch rockets will be reusable just like aircraft, ships and cars. Today SpaceX launched an upgraded Falcon 9 rocket into space to deliver satellites as well as to test its reusable rocket technology. All the satellites on board were released into orbit, and the first stage of the rocket partially succeeded in returning to Earth. This is a huge leap in space technology.

    A couple of years ago reusable rockets were considered impossible. Neither NASA, the Russian space agency, China, India nor, for that matter, any other space agency had even attempted to build reusable rockets. But SpaceX’s revolutionary technology partially succeeded in doing the impossible! Elon Musk founded SpaceX with the goal of building reusable rockets and transporting humans to and from other planets like Mars. He says: "If one can figure out how to effectively reuse rockets just like airplanes, the cost of access to space will be reduced by as much as a factor of a hundred. A fully reusable vehicle has never been done before. That really is the fundamental breakthrough needed to revolutionize access to space."

    Normally the first stage of a rocket falls back to Earth after burning out and is destroyed. But today SpaceX reignited the first stage after its separation and attempted to descend it smoothly onto the ocean’s surface. Though it did not fully succeed, the test was partially successful and SpaceX was able to recover portions of the first stage.

    Rocket booster relit twice (supersonic retro & landing), but spun up due to aero torque, so fuel centrifuged & we flamed out — Elon Musk (@elonmusk) September 29, 2013

    With the partial success of recovering the first stage, SpaceX gathered a huge amount of information and experience it can use to improve Falcon 9 and build a fully reusable rocket. In the post-launch press conference, Musk said that if things go "super well", SpaceX could refly a Falcon 9 first stage by the end of next year.

    Falcon 9 Launch Video

    Next reusable first stage tests delayed by at least two launches

    SpaceX has a busy schedule for the next several months, with more than 50 missions scheduled using the new Falcon 9 rocket. Ten of those missions are to fly cargo to the International Space Station for NASA. SpaceX announced that it will not attempt to recover the first stage of Falcon 9 in the next two missions. The next test will be conducted on the fourth mission of Falcon 9, which is planned to carry cargo to the International Space Station sometime next year. This will give SpaceX the time required to analyze the information gathered from today’s mission and improve the first stage reentry systems.

    More reading: here are a few interesting sources to read more about today’s SpaceX launch
    - SpaceX post mission press conference details and discussion on Reddit
    - Giant Leaps for Space Firms Orbital, SpaceX
    - Hacker News community discussion on SpaceX launch
    - SpaceX Launches Next-Generation Private Falcon 9 Rocket on Big Test Flight

    Read the article

  • Microsoft Technical Computing

    - by Daniel Moth
    In the past I have described the team I belong to here at Microsoft (Parallel Computing Platform) in terms of contributing to Visual Studio and related products, e.g. the .NET Framework. To be more precise, our team is part of the Technical Computing group, which is still part of the Developer Division. This was officially announced externally earlier this month in an exec email (from Bob Muglia, the president of STB, to which DevDiv belongs). Here is an extract:

    "… As we build the Technical Computing initiative, we will invest in three core areas:

    1. Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high-performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable ‘just-in-time’ processing. This platform will help ensure processing resources are available whenever they are needed—reliably, consistently and quickly.

    2. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and troubleshoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today’s modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop… to the cluster… to the cloud.

    3. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. …"

    Our Parallel Computing Platform team is directly responsible for item #2, and we work very closely with the teams delivering items #1 and #3. At the same time as the exec email, our marketing team unveiled a website with interviews that I invite you to check out: Modeling the World. Comments about this post welcome at the original blog.
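
    To give item #2 above some flavor: the consistent parallel model our team ships in .NET 4 centers on the Task Parallel Library and PLINQ. Here is a minimal sketch of both (an illustrative example of mine, not taken from the announcement):

        using System;
        using System.Linq;
        using System.Threading.Tasks;

        class ParallelSketch
        {
            static void Main()
            {
                // Data-parallel loop: the runtime partitions iterations across cores.
                Parallel.For(0, 10, i =>
                    Console.WriteLine("Processing item {0}", i));

                // PLINQ: the same query shape as LINQ, parallelized declaratively.
                long sum = Enumerable.Range(1, 1000)
                                     .AsParallel()
                                     .Select(n => (long)n * n)
                                     .Sum();
                Console.WriteLine("Sum of squares: {0}", sum);
            }
        }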

    Read the article

  • The 2010 Life Insurance Conference - Washington, DC

    - by [email protected]
    How ironic to be in Washington, DC on April 15 - TAX DAY! Fortunately, I avoided IRS offices and attended the much more enjoyable 2010 Life Insurance Conference, presented by LIMRA, LOMA, SOA and ACLI. This year's conference offered a variety of tracks focused on the life industry, including Distribution/Marketing, Administration, Actuarial/Product Development, Regulatory, Reinsurance and Strategic Management.

    President and CEO of the ACLI, Frank Keating, opened the event by moderating a session titled "Executive Viewpoint on New Opportunities." Guest speakers included Ted Mathas, President and CEO of NY Life, and John Walters, President and CEO of Hartford Life. Both speakers were insightful as they shared the challenges and opportunities each company faces and the key role life insurance companies play in our society and the global economy.

    There were several key themes that were reiterated in multiple sessions throughout the conference - the economy is on the rebound, optimism is growing, consumer spending is up and an uptick in employment is likely to follow. The threat of a double-dip recession seems to have passed. Good news for our industry, and welcomed by all in attendance.

    Of special interest to me, given my background, was some research shared by both The Nolan Group and Novarica in separate sessions. Both firms indicate that policy administration upgrade/replacement projects remain a top priority in 2010. Carriers continue to invest in modern technology. Modern, ultra-configurable systems enable carriers to switch from a waterfall to an agile project methodology, which often entails a "culture change" within an organization.

    Other themes heard throughout the two-day event: Virtually all sessions focused on people, process and technology! Product innovation, agility and speed to market are as important as ever. Social networks and Twitter are becoming more popular ways of communicating with both field and dispersed staff. Several sessions focused on the application, new business and underwriting process. Companies continue looking for ways to increase market agility, accelerate speed to market, address cost issues and improve service levels across the process. They recognize the need to ease the way to do business with both producers and consumers.

    Author and economic futurist Jeff Thredgold presented an entertaining, informative and humorous general session on Wednesday afternoon that focused on the US and global economies, financial markets and the retirement outlook. Thredgold did not disappoint anyone with his message! The Thursday morning general session was keynoted by Therese Vaughan (CEO - NAIC) and Thomas Crawford (President of C2 Group). Both speakers gave a poignant view of the recent financial crisis and discussed "Putting the Pieces Back Together." Therese spoke of the recent financial turmoil and likely changes to regulations in the financial services sector. Tom's topics focused on economic recovery and the political environment in Washington, and how that impacts our industry.

    Next year's event will be April 11-13, 2011 in Las Vegas.

    Roger A. Soppe, CLU, LUTCF, is the Senior Director of Insurance Strategy, Oracle Insurance.

    Read the article

  • The Whole Enchilada — Fusion Supply Chain in the Cloud

    - by Kathryn Perry
    A guest post by Tyra Crockett, Senior Manager at Oracle

    No other vendor can offer everything in the cloud the way Oracle can. You can get HR from Workday and CRM from Salesforce, but you can get the whole enchilada—HCM, CRM and ERP—all from Oracle on one platform. If you’re thinking about using Oracle's Cloud Services to implement the newest Oracle Fusion Supply Chain applications, this post is for you.

    Point #1: The Oracle Cloud Applications Services portfolio includes ERP cloud services which are flexible and can adapt to fill your supply chain needs. For example, you might be opening a small distribution facility in California, but don’t have the time or IT resources to warrant a full scale supply chain implementation. You can use Oracle’s Cloud to implement the Oracle Fusion Supply Chain applications you need without an increase in IT staff or hardware. Then as your business grows, you can add more features and applications to your cloud.

    Point #2: Whether you’re implementing a slice of the Fusion Procurement pie, or the entire ERP portfolio, you want to be up and running fast with low upfront costs and investment risks. That’s where you can trust a world-class technology organization like Oracle. Your SaaS subscription-based deployment model will take away the headaches associated with determining your software costs. You will also be able to eliminate expensive customizations and configure your deployment as you like, saving you time and money during the initial stages and upon upgrade.

    Point #3: Another great benefit of operating your Oracle Fusion Supply Chain in the cloud is the opportunity to standardize your processes across your entire supply chain. You can institute processes in San Francisco and be confident they will be followed in Mexico City and Hong Kong.

    Point #4: If data security is a concern – and it is for most of us – Oracle-managed cloud services give you the comfort of knowing that your data will always be there when you need it. You will not have to manage the IT services associated with patching and upgrade. They will be taken care of automatically. This enables you to focus on what you do best: managing your business.

    Point #5: Cloud services aren’t an either/or proposition. You might have very good business reasons for choosing a hybrid model -- running some applications in the cloud and others on premise. That allows you to leverage your own IT department, when and where you need to, and shift focus when necessary.

    I urge you to take a hard look at the Oracle Fusion Supply Chain applications running in the cloud. These solutions running alongside your existing legacy systems can solve your toughest business challenges as you move forward in the 21st century.

    Read the article

  • Sync Your Windows Computer with Your Ubuntu One Account [Desktop Client]

    - by Asian Angel
    Do you have a Windows computer that needs to be synced with the Ubuntu systems connected to your Ubuntu One account? Not a problem. Just grab a copy of the Ubuntu One Desktop Client and in just a few minutes your Windows system will be feeling the Ubuntu love.

    Once you get the desktop client installed you will see a new System Tray Icon waiting for you. Access the Context Menu and select "Add this computer" to start the syncing process. Enter your account details into the login window that appears and click "Connect to Ubuntu One". Go back to the System Tray Icon, access the Context Menu, and select "Synchronize Now". You can monitor the progress as small desktop notification messages keep you updated during the synchronizing process. The newly synchronized files will be placed in an Ubuntu One folder under Documents/My Documents.

    Here is a quick peek at the Preferences Window. The only odd thing (bug) that we noticed with the whole setup was “Disconnected” being displayed even though our system was freshly synchronized and logged in.

    Note: Works on Windows XP (with SP3 & Windows Installer 4.5), Vista, and Windows 7. You will need to have the .NET 4 Framework installed (links for both installer types provided below).

    Need to access your Ubuntu One account directly through your browser? Then see our article on Accessing and Managing Your Ubuntu One Account in Chrome and Iron.

    Links:
    - Download the Ubuntu One Desktop Client [Ubuntu One Wiki] *Click on the (https://one.ubuntu.com/windows/beta) link to start the download.
    - Microsoft .NET Framework 4 (Standalone Installer) [Microsoft]
    - Microsoft .NET Framework 4 (Web Installer) [Microsoft]

    Read the article

  • Multicast delegates in C#

    - by Jalpesh P. Vadgama
    In yesterday’s post we learned about delegates and how we can use them in C#. In today’s blog post we are going to learn about multicast delegates.

    What are Multicast Delegates?

    As we know, we can assign a method to a delegate as an object and later on call that method through the delegate. We can also assign more than one method to a delegate; that is called a multicast delegate. It provides the ability to execute more than one method at a time, maintaining the assigned delegates in an invocation list (a linked list). Let’s understand that via an example. We are going to use yesterday’s example and extend that code to multicast delegates. I have written the following code to demonstrate multicast delegates.

        using System;

        namespace Delegates
        {
            class Program
            {
                public delegate void CalculateNumber(int a, int b);

                static void Main(string[] args)
                {
                    int a = 5;
                    int b = 5;

                    CalculateNumber addNumber = new CalculateNumber(AddNumber);
                    CalculateNumber multiplyNumber = new CalculateNumber(MultiplyNumber);

                    // Combine both delegates into a single multicast delegate.
                    CalculateNumber multiCast = (CalculateNumber)Delegate.Combine(addNumber, multiplyNumber);
                    multiCast.Invoke(a, b);

                    Console.ReadLine();
                }

                public static void AddNumber(int a, int b)
                {
                    Console.WriteLine("Adding Number");
                    Console.WriteLine(a + b);
                }

                public static void MultiplyNumber(int a, int b)
                {
                    Console.WriteLine("Multiply Number");
                    Console.WriteLine(a * b);
                }
            }
        }

    As you can see in the code above, I have created two methods, one for adding two numbers and another for multiplying two numbers. After that I created two delegates of the same CalculateNumber type, addNumber and multiplyNumber, and then created a multicast delegate, multiCast, by combining the two delegates. To call both methods I used the Invoke method on the multicast delegate. Now that our code is ready, let’s run the application. The output is as expected: both methods execute, in the order they were combined.

    As you can see, we can execute multiple methods with a multicast delegate; the only thing you need to take care of is that all the combined delegates must be of the same type. That’s it. Hope you like it. Stay tuned for more. Till then, happy programming.
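
    One more aside on the example above (not covered in the original post): C# also overloads the + and += operators for delegates, which compile down to the same Delegate.Combine call, so the combining line can be written more idiomatically:

        // Equivalent to (CalculateNumber)Delegate.Combine(addNumber, multiplyNumber)
        CalculateNumber multiCast = addNumber + multiplyNumber;
        multiCast += addNumber;   // appends a third target to the invocation list
        multiCast(a, b);          // shorthand for multiCast.Invoke(a, b)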

    Read the article

  • Playing NSF music in FMOD.net

    - by Tesserex
    So, as the title says, I want to be able to play NSF files using FMOD, because my project already uses FMOD and I'd rather not replace it. This will involve figuring out how existing players and emulators work and porting it. I haven't yet found an existing player that uses FMOD. My starting point is the MyNes source from http://sourceforge.net/projects/mynes/. There are two big steps between here and what I'm looking for:

    1. MyNes plays from a ROM, not NSF. So, I have to rip out the APU and get it to play NSF files.
    2. The MyNes APU uses SlimDX, so I have to convert that to FMOD.NET.

    I am really stuck on how to go about either of these, because I'm not that familiar with audio formats and it's hard finding resources online. So here are a few questions:

    1. From what I can tell from the NSF spec at http://kevtris.org/nes/nsfspec.txt, it just contains the relevant memory section of the ROM, plus the header. If anyone can verify or correct this that would be great.
    2. The emulator APU uses data from the rest of the emulator to play, including things like cycle counts. I'm not sure what replaces this in a standalone player. Can't I just load all the music data at once into a stream and play it?
    3. Joining #1 and #2, does the header data from the NSF substitute for some of the ROM data in the emulator code?
    4. Using FMOD, will I be following the usercreatedsound example for loading a stream? And does this format count as PCM? Specifically, MyNes says PCM8. Any tips on loading / playing the stream in FMOD are appreciated.

    As an aside, I don't really understand the loading / playing sections of the spec I linked at all. It seems to apply to 6502 systems / emulators only and not to my situation. I know it's a long shot for anyone here to have enough experience in this area to help, but anything you can provide is definitely appreciated. A link to an existing .NET library that does this would be even better, but I don't believe one exists.
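
    As a starting point for question #4 above: the usercreatedsound path is the natural fit here, because FMOD pulls raw PCM from a callback you supply, which is exactly where an emulated APU can render its output. Below is a rough, untested sketch against the FMOD Ex C# wrapper (the class, field and enum names follow the wrapper's fmod.cs; treat the exact signatures as assumptions to verify against your FMOD.NET version, and note that apu.Render is a hypothetical stand-in for the ported MyNes APU):

        using System;
        using System.Runtime.InteropServices;

        class NsfPlaybackSketch
        {
            static FMOD.System system = null;
            static FMOD.Sound sound = null;
            static FMOD.Channel channel = null;
            // Keep a reference to the delegate so the GC doesn't collect it
            // while FMOD still holds the callback pointer.
            static FMOD.SOUND_PCMREADCALLBACK readCallback = PcmRead;

            static void Main()
            {
                FMOD.Factory.System_Create(ref system);
                system.init(32, FMOD.INITFLAGS.NORMAL, (IntPtr)null);

                FMOD.CREATESOUNDEXINFO exinfo = new FMOD.CREATESOUNDEXINFO();
                exinfo.cbsize = Marshal.SizeOf(exinfo);
                exinfo.numchannels = 1;                     // NES APU output is mono
                exinfo.defaultfrequency = 44100;
                exinfo.format = FMOD.SOUND_FORMAT.PCM8;     // matches MyNes' PCM8
                exinfo.decodebuffersize = 4410;             // samples pulled per callback
                exinfo.length = 44100;                      // looped, so size is nominal
                exinfo.pcmreadcallback = readCallback;

                system.createSound((string)null,
                    FMOD.MODE.OPENUSER | FMOD.MODE.LOOP_NORMAL,
                    ref exinfo, ref sound);
                system.playSound(FMOD.CHANNELINDEX.FREE, sound, false, ref channel);
                Console.ReadLine();
            }

            static FMOD.RESULT PcmRead(IntPtr soundraw, IntPtr data, uint datalen)
            {
                byte[] buffer = new byte[datalen];
                // apu.Render(buffer);  // hypothetical: run the emulated APU for datalen samples
                Marshal.Copy(buffer, 0, data, (int)datalen);
                return FMOD.RESULT.OK;
            }
        }

    The idea, which also bears on question #2, is that you don't load the NSF "music data" into a stream at all: the data is 6502 code and tables, not PCM, so the emulated CPU and APU run continuously and the callback drains whatever audio they produce.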

    Read the article

  • Sesame update du jour: SL 4, OOB, Azure, and proxy support

    - by Fabrice Marguerie
    I've just published a new version of Sesame Data Browser. Here's what's new this time:

    - Upgraded to Silverlight 4
    - Can run out-of-browser (OOB), with elevated permissions. This gives you an icon on your desktop and enables new scenarios. Note: the application is unsigned for the moment.
    - Support for Windows Azure authentication
    - Support for SQL Azure authentication
    - If you are behind a proxy that requires authentication, just give Sesame a new try after clicking on "If you are behind a proxy that requires authentication, please click here"
    - An icon and a button for closing connections are now displayed on connection tabs
    - Some less visible improvements

    Here is the connection view with anonymous access. If you want to access Windows Azure tables as OData, all you have to do is use your table storage endpoint as the URL, and provide your access key. A Windows Azure table storage address looks like this: http://<your account>.table.core.windows.net/

    If you want to browse your SQL Azure databases with Sesame, you have to enable OData support for them at https://www.sqlazurelabs.com/ConfigOData.aspx. I won't show how it works because it's already been done in several places over the Web. Here are pointers:

    - OData.org: Got SQL Azure? Then you've got OData
    - OakLeaf Systems: Enabling and Using the OData Protocol with SQL Azure
    - Patrick Verbruggen: Creating an OData feed for your Azure databases
    - Shawn Wildermuth: SQL Azure's OData Support
    - Jack Greenfield: How to Use OData for SQL Azure with AppFabric Access Control

    You can choose to enable anonymous access or not. When you don't enable anonymous access, you have to provide an Issuer name and a Secret key, and optionally a Security Token Service (STS) endpoint.

    Excerpt from Jack Greenfield's blog: To enable OData access to the currently selected database, check the box labeled "Enable OData". When OData access is enabled, database user mapping information is displayed at the bottom of the form. Use the drop down list labeled "Anonymous Access User" to select an anonymous access user. If an anonymous access user is selected, then all queries against the database presented without credentials will execute by impersonating that user. You can access the database as the anonymous user by clicking on the link provided at the bottom of the page. If no anonymous access user is selected, then the OData Service will not allow anonymous access to the database. Click the link labeled "Add User" to add a user for authenticated access. In the pop up panel, select the user from the drop down list. Leave the issuer name empty for simple authentication, or provide the name of a trusted Security Token Service (STS) for federated authentication. For example, to federate with another ACS based STS, provide the base URI for the STS endpoint displayed by the Windows Azure AppFabric Portal for the STS. Click the "OK" button to complete the configuration process and dismiss the pop up panel. When one or more authenticated access users are added, the OData Service will impersonate them when appropriate credentials are presented. You can designate as many authenticated access users as you like. The OData Service will decide which one to impersonate for each query by inspecting the credentials presented with the query.

    Next time I'll give an overview of how Sesame Data Browser is built. In the meantime, happy data browsing!
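
    For the curious: you can also poke at one of these OData feeds outside Sesame, since an OData service returns plain Atom over HTTP. Here is a quick illustrative C# sketch (my own example, not part of Sesame; the URL shape follows the sqlazurelabs pattern, and <server>/<database> are placeholders for your own values):

        using System;
        using System.Net;
        using System.Xml.Linq;

        class ODataPeek
        {
            static void Main()
            {
                // Placeholder endpoint: an anonymous-access SQL Azure OData service.
                string url = "https://odata.sqlazurelabs.com/OData.svc/v0.1/<server>/<database>/Customers";

                using (var client = new WebClient())
                {
                    XDocument feed = XDocument.Parse(client.DownloadString(url));
                    XNamespace atom = "http://www.w3.org/2005/Atom";

                    // Each Atom <entry> element is one row of the exposed table.
                    foreach (XElement entry in feed.Root.Elements(atom + "entry"))
                        Console.WriteLine(entry.Element(atom + "id").Value);
                }
            }
        }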

    Read the article

  • TSAM 11gR1

    - by todd.little
    The Tuxedo System and Application Monitor (TSAM) 11gR1 release provides powerful new application monitoring capabilities, as well as significant improvements in ease of use. The first thing users will notice is the completely redesigned user interface in the TSAM console. Based on Oracle ADF, the console is much easier to navigate, provides a Web 2.0 style interface with dynamically updating panels, and offers a look and feel familiar to those that have used Oracle Enterprise Manager. Monitoring data can be viewed in both tabular and graphical form and exported to Excel for further analysis.

    A number of new metrics are collected and displayed in this release. Call path monitoring now displays CPU time, message size, total transport time, and client address, giving even more end-to-end information about a specific Tuxedo request. As well, the call path display has been completely revamped to make it much easier to see the branches of the call path. The call pattern display now provides statistics on successful vs failed calls, system and application failures, and end-to-end average elapsed time. Service monitoring now displays minimum and maximum message size, CPU usage, and client address. System server monitoring now includes the SALT gateway servers, providing detailed performance metrics about those servers.

    Perhaps the most significant new feature is the consolidation of alert definitions and policy management. In previous versions of TSAM, some alerts were defined and checked on the monitored systems while others were defined and checked in the console. Policy management could be performed both on the monitored node, via environment variable or command, and from the console. Now all alert definitions and policy definitions are made only in the console. For alerts this means that regardless of where the alert is evaluated, it is defined in one and only one place. Thus the plug-in alert mechanism of previous releases can now be managed using the TSAM console, making SLA alert definition much easier and cleaner.

    Finally, there is support in TSAM for monitoring rehosted mainframe applications. The newly announced Oracle Tuxedo Application Runtime for CICS and Batch can be monitored in the TSAM console using traditional mainframe views of the application, such as regions. Look for a future blog entry with more details on this, as well as some entries providing a glimpse of the console. TSAM gives users a single point for monitoring the performance of all of their Tuxedo applications.

    Read the article

  • How can I troubleshoot flash player/hardware conflict?

    - by sparthikas
    OBJECTIVE: Have a web browser on my Ubuntu install that can play YouTube and Hulu videos. Also would like to understand the problem so that I can fix it again if I change software. Workarounds welcome; technical understanding and a solution preferable.

    SYMPTOMS:
    - Flash does not run - cannot make the right-click menu appear, an empty box is where video should be, changing to a black box when hovering over other links. The "The Adobe Flash plugin has crashed" message does not appear with its sad lego face.
    - Cannot activate the proprietary graphics driver - it causes the system to reboot to a prompt.

    SOLUTIONS TRIED:
    - Replaced the OS (tried Slackware 13.37, Fedora 17, Linux Mint 13 Maya, Gentoo, Lubuntu, and even WinXP. Lubuntu confirmed to work; don't remember how much tweaking, if any, this required. Slack, Fedora, Mint, and Gentoo all failed to run Flash just like Ubuntu.)
    - Many reinstalls of Flash Player via different methods, including cleaning up old installs first; also tried Gnash and Lightspark.
    - Replaced the graphics card (replaced HIS IceQ Radeon HD 4670 AGP with an older GeForce 5700 LE; no change in the problem).
    - Flash does successfully work on the WinXP installation with the Catalyst AGP hotfix driver applied; however, I consider Windows wholly unacceptable for web browsing due to lack of security. The Lubuntu install also works, however I do not want to be tied down to just using Lubuntu on this computer.

    SYSINFO: Have the latest versions of Ubuntu, Firefox, and Flash on a fresh Ubuntu install. Using a Gigabyte 7S748 motherboard with an Athlon XP 2800+ and 3 GB of RAM, the Radeon HD 4670 AGP card, and a Dell SoundBlaster Live series sound card (due to malfunction of the onboard sound on the motherboard). Wired internet connection, Maxtor 6Y120L3 HDD, Sony DVD RW AW-Q170A, Dell M993s monitor.

    NOTES: I do not know if the graphics driver issue and the Flash troubles are linked. Substitution of an older graphics card having the same Flash troubles seems to suggest they aren't. My troubleshooting method is rather reductionist, consisting mainly of "replace things until you find out what was causing the error by process of elimination", only it seems that this must be a conflict which arises when software decides how to configure itself on my hardware. That is, I know the hardware can run Flash, and I know that on other systems the same software can too, but somehow the combination fails. Consequently I feel out of my depth. I will keep trying things off and on, but I have spent probably about 30 man-hours in the last 4 months working on this problem with no joy other than the Lubuntu workaround. Any help will be appreciated; I will be checking back and posting updates. Any pertinent questions regarding me or my computer will be answered, and outputs from config files can be accessed and posted (IDK which ones or what parts to post).

    Read the article

  • Book review: Microsoft System Center Enterprise Suite Unleashed

    - by BuckWoody
    I know, I know – what’s a database guy doing reading a book on System Center? Well, I need it from time to time. System Center is actually a collection of about 7 different products that you can use to manage and monitor your software and hardware, from drive space through Microsoft Office, UNIX systems, and yes, SQL Server. It’s that last part I care about the most, and so I’ve dealt with Data Protection Manager and System Center Operations Manager (I call it SCOM) in SQL Server. But I wasn’t familiar with the rest of the suite, nor was I as familiar as I needed to be with the “Essentials” release – a separate product that groups together the main features of System Center into a single offering for smaller organizations. These companies usually run with a smaller IT shop, so they sometimes opt for this product to help them monitor everything, including SQL Server.

    So I picked up “Microsoft System Center Enterprise Suite Unleashed” by Chris Amaris and a cast of others. I don’t normally like to get a technical book by multiple authors – I just find that most of the time it’s quite jarring to switch from author to author – but I think this group did pretty well here. The first chapter on introducing System Center has helped me talk with others about what the product does, and which pieces fit well together with SQL Server. The writing is well done, and I didn’t find a jump from author to author as I went along. The information is sequential, meaning that they lead you from install to configuration and then use. It’s very much a concepts-and-how-to book, and a big one at that – over 950 pages of learning! It was a pretty quick read, though, since I skipped the installation parts and there are lots of screenshots.

    I’m not sure you’d be an expert on the product when you finish reading this book, but I would say you’re more than halfway there. I would say it suits someone that learns through examples best, since it has a lot of step-by-step examples. I do recommend that you take a look if you have to interact with this product, or even if you are a smaller shop and you’re the primary IT resource. The last few chapters deal with System Center Essentials, and honestly that was the best part of the book for me.

    Read the article

  • Data Modeling Resources

    - by Dejan Sarka
    You can find many different data modeling resources. It is impossible to list all of them. I selected only the most valuable ones for me, and, of course, the ones I contributed to.

    Books
    - Chris J. Date: An Introduction to Database Systems – IMO a “must” to understand the relational model correctly.
    - Terry Halpin, Tony Morgan: Information Modeling and Relational Databases – meet the object-role modeling leaders.
    - Chris J. Date, Nikos Lorentzos and Hugh Darwen: Time and Relational Theory, Second Edition: Temporal Databases in the Relational Model and SQL – all the theory needed to manage temporal data.
    - Louis Davidson, Jessica M. Moss: Pro SQL Server 2012 Relational Database Design and Implementation – the best SQL Server focused data modeling book I know, by two of my friends.
    - Dejan Sarka, et al.: MCITP Self-Paced Training Kit (Exam 70-441): Designing Database Solutions by Using Microsoft® SQL Server™ 2005 – SQL Server 2005 data modeling training kit. Most of the text is still valid for SQL Server 2008, 2008 R2, 2012 and 2014.
    - Itzik Ben-Gan, Lubor Kollar, Dejan Sarka, Steve Kass: Inside Microsoft SQL Server 2008 T-SQL Querying – Steve wrote a chapter with mathematical background, and I added a chapter with a theoretical introduction to the relational model.
    - Itzik Ben-Gan, Dejan Sarka, Roger Wolter, Greg Low, Ed Katibah, Isaac Kunen: Inside Microsoft SQL Server 2008 T-SQL Programming – I added three chapters with a theoretical introduction and practical solutions for user-defined data types, dynamic schema and temporal data.
    - Dejan Sarka, Matija Lah, Grega Jerkic: Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft SQL Server 2012 – my first two chapters are about data warehouse design and implementation.

    Courses
    - Data Modeling Essentials – I wrote a 3-day course for SolidQ. If you are interested in this course, which I could also deliver in a shorter seminar way, you can contact your closest SolidQ subsidiary, or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, which are welcome to contact me as well.
    - Logical and Physical Modeling for Analytical Applications – an online course I wrote for Pluralsight.
    - Working with Temporal data in SQL Server – my latest Pluralsight course, where besides theory and implementation I introduce many original ways to optimize temporal queries.

    Forthcoming presentations
    - SQL Bits 12, July 17th – 19th, Telford, UK – I have a full-day pre-conference seminar there, Advanced Data Modeling Topics.

    Read the article

  • What have my fellow Delphi programmers done to make Eclipse/Java more like Delphi?

    - by Robert Oschler
    I am a veteran Delphi programmer working on my first real Android app. I am using Eclipse and Java as my development environment. The thing I miss the most, of course, is Delphi's VCL components and the associated IDE tools for design-time editing and code creation. Fortunately I am finding Eclipse to be one hell of an IDE with its lush context-sensitive help, deep auto-complete and code wizard facilities, and other niceties. This is a huge double treat since it is free. However, here is an example of something in the Eclipse/Java environment that will give a Delphi programmer pause. I will use the simple case of adding an "on-click" code stub for an OK button.

    DELPHI
    1. Drop button on a form
    2. Double-click button on form and fill in the code that will fire when the button is clicked

    ECLIPSE
    1. Drop button on layout in the graphical XML file editor
    2. Add the View.OnClickListener interface to the containing class's "implements" list if not there already. (Command+1 on Macs, Ctrl+1 on PCs, I believe.)
    3. Use Eclipse to automatically add the code stub for unimplemented methods needed to support the View.OnClickListener interface, thus creating the event handler function stub.
    4. Find the stub and fill it in. However, if you have more than one possible click event source then you will need to inspect the View parameter to see which View element triggered the OnClick() event, thus requiring a case statement to handle multiple click event sources.

    NOTE: I am relatively new to Eclipse/Java, so if there is a much easier way of doing this please let me know.

    Now that work flow isn't all that terrible, but again, that's just the simplest of use cases. Ratchet up the amount of extra work and thinking for a more complex component (aka widget) and the large number of properties/events it might have, and it won't be long before you dearly miss the Delphi intelligent property editor and other designers. Eclipse tries to cover this ground by having an extensive list of properties in the menu that pops up when you right-click over a component/widget in the XML graphical layout editor. That's a huge and welcome assist, but it's just not even close to the convenience of the Delphi IDE.

    Let me be very clear. I absolutely am not ranting, nor do I want to start a Delphi vs. Java ideology discussion. Android/Eclipse/Java is what it is, and there is a lot that impresses me. What I want to know is what other Delphi programmers that made the switch to the Eclipse/Java IDE have done to make things more Delphi-like, and not just to make component/widget event code creation easier but any programming task. For example:

    - Clever tips/tricks
    - Eclipse plugins you found
    - Other ideas?

    Any great blog posts or web resources on the topic are appreciated too. -- roschler

    Read the article

  • JCP.Next - Early Adopters of JCP 2.8

    - by Heather VanCura
    JCP.Next is a series of three JSRs (JSR 348, JSR 355 and JSR 358), to be defined through the JCP process itself, with the JCP Executive Committee serving as the Expert Group. The proposed JSRs will modify the JCP's processes - the Process Document and Java Specification Participation Agreement (JSPA) - and will apply to all new JSRs for all Java platforms.

    The first - JCP.next.1, or more formally JSR 348, Towards a new version of the Java Community Process - was completed and put into effect in October 2011 as JCP 2.8. This focused on a small number of simple but important changes to make our process more transparent and to enable broader participation. We're already seeing the benefits of these changes as new and existing JSRs adopt the new requirements.

    The second - JSR 355, Executive Committee Merge - is also Final. You can read the JCP 2.9 Process Document. As part of the JSR 355 Final Release, the JCP Executive Committee published revisions to the JCP Process Document (version 2.9) and the EC Standing Rules (version 2.2). The changes went into effect following the 2012 EC Elections in November.

    The third - JSR 358, A major revision of the Java Community Process - was submitted in June 2012. This JSR will modify the Java Specification Participation Agreement (JSPA) as well as the Process Document, and will tackle a large number of complex issues, many of them postponed from JSR 348. For these reasons, the JCP EC (acting as the Expert Group for this JSR) expects to spend a considerable amount of time working on it. The JSPA is defined by the JCP as "a one-year, renewable agreement between the Member and Oracle." The success of the Java community depends upon an open and transparent JCP program. JSR 358 is now in process and can be followed on java.net.

    The following JSRs and Spec Leads were the early adopters of JCP 2.8, who voluntarily migrated their JSRs from JCP 2.x to JCP 2.8 or above. More candidates for 2012 JCP Star Spec Leads!

    - JSR 236, Concurrency Utilities for Java EE (Anthony Lai/Oracle), migrated April 2012
    - JSR 308, Annotations on Java Types (Michael Ernst, Alex Buckley/Oracle), migrated September 2012
    - JSR 335, Lambda Expressions for the Java Programming Language (Brian Goetz/Oracle), migrated October 2012
    - JSR 337, Java SE 8 Release Contents (Mark Reinhold/Oracle) – EG Formation, migrated September 2012
    - JSR 338, Java Persistence 2.1 (Linda DeMichiel/Oracle), migrated January 2012
    - JSR 339, JAX-RS 2.0: The Java API for RESTful Web Services (Santiago Pericas-Geertsen, Marek Potociar/Oracle), migrated July 2012
    - JSR 340, Java Servlet 3.1 Specification (Shing Wai Chan, Rajiv Mordani/Oracle), migrated August 2012
    - JSR 341, Expression Language 3.0 (Kin-man Chung/Oracle), migrated August 2012
    - JSR 343, Java Message Service 2.0 (Nigel Deakin/Oracle), migrated March 2012
    - JSR 344, JavaServer Faces 2.2 (Ed Burns/Oracle), migrated September 2012
    - JSR 345, Enterprise JavaBeans 3.2 (Marina Vatkina/Oracle), migrated February 2012
    - JSR 346, Contexts and Dependency Injection for Java EE 1.1 (Pete Muir/RedHat), migrated December 2011

    Read the article

  • ATG Live Webcast April 5: Managing Your Oracle E-Business Suite with Oracle Enterprise Manager

    - by BillSawyer
    The next ATG Live Webcast covers one of the hottest topic areas in E-Business Suite Tools and Technology: Lifecycle Management. Angelo Rosado, Product Manager, ATG Development, will lead you through using Oracle Enterprise Manager 12c and the latest E-Business Suite Plug-in to manage E-Business Suite systems. You can register for the Apr. 5, 2012 event at: Managing Your Oracle E-Business Suite with Oracle Enterprise Manager

    The topics covered in this webcast will be:
    - Manage your EBS system configurations
    - Monitor your EBS environment's performance and uptime
    - Keep multiple EBS environments in sync with their patches and configurations
    - Create patches for your EBS customizations and apply them with Oracle's own patching tools

    Date: Thursday, April 5, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenter: Angelo Rosado, Product Manager, ATG Development
    Webcast Registration Link (Preregistration is optional but encouraged)

    To hear the audio feed:
    Domestic Participant Dial-In Number: 877-697-8128
    International Participant Dial-In Number: 706-634-9568
    Additional International Dial-In Numbers Link
    Dial-In Passcode: 99342

    To see the presentation, the direct access Web conference details are:
    Website URL: https://ouweb.webex.com
    Meeting Number: 597073984

    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay. And, you can find our archive of our past webcasts and training here.

    If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • Nokia vs. The World

    - by Michael B. McLaughlin
    I’m looking forward to the launch of the Nokia Lumia 920. Why? Well, it stacks up better than the competition for one thing. Then there’s also that security problem that certain other phones have. Mostly, though, it’s because I love my Lumia 900 and the 920, with Windows Phone 8, will be even better. Before I got my Lumia 900, I just took it as given that smart phone cameras couldn’t be good. The Lumia taught me that smart phone cameras can be good if the manufacturer treats them as an important component worth spending time and money on (rather than some thing that consumers expect such that they’d better throw one in). I’m extremely pleased with the quality of pictures that my Lumia 900 gives me as well as the range of settings it provides (you can delve in to tell it a film speed, an f-stop, and a whole range of other settings). And the image stabilization features in the Lumia 920 deliver far better results than the others. Nokia has had great maps for a long time and they continue to improve. Even better, they made a deal that puts many of their excellent maps into Windows Phone 8 itself. There are still Nokia-exclusive features such as Nokia City Lens, of course. But by giving the core OS a great set of fundamental map data and technologies, they help ensure that customers know that buying a Windows Phone 8 will give them a great map experience no matter who made the phone. I’ll be getting a 920, myself, but the HTC and Samsung devices that have been announced have some compelling features, too, and it’s great to know that people who buy one of these won’t need to worry about where their maps might lead them. I’m looking forward to the NFC capabilities and Qi wireless charging my Lumia 920 will have. With the availability of DirectX and C++ programming on Windows Phone 8, I’m also excited about all the great games that will be added to the Windows Phone environment. I love my Xbox Phone. I love my Office phone. I love my Facebook phone. I love my GPS phone. I love my camera phone. I love my SkyDrive phone. In short, I love my Windows Phone!

    Read the article

  • Digital Storage for Airline Entertainment

    - by Bill Evjen
    by Thomas Coughlin

    Common flash memory cards
    - The most common flash memory products currently in use are SD cards and derivative products (e.g. mini and micro-SD cards)
    - Some CompactFlash is used for professional applications (such as DSLR cameras)

    Evolution of leading flash formats
    - Standardization –> market expansion
    - Market expansion –> volume
    - iNAND –> focus is on enabling embedded X3
    - iSSD –> ideal for thin form factor devices

    Flash memory applications
    - Phones are the #1 user of flash memory
    - Flash memory is used as embedded and removable storage in many mobile applications
    - Flash memory is being used in computers as USB sticks and SSDs
    - Possible use of flash memory in computers combined with HDDs (hybrid HDDs and paired or dual storage computers); it can be a removable card or an embedded card
    - These devices can only handle a specific number of writes
    - Flash memory reads considerably quicker than hard drives

    Hybrid and dual storage in computers
    - SSDs can provide fast performance but they are expensive
    - HDDs can provide cheap storage but they are relatively slow
    - Combining some flash memory with an HDD can provide costs close to those of HDDs and performance close to flash memory
    - Seagate Momentus XT hybrid HDD
    - Various dual storage offerings putting flash memory with HDDs

    Other common flash memory devices
    - USB sticks: all forms and colors, used for moving files around, some sold with content on them (Sony movies on USB sticks)
    - Solid State Drives (SSDs)

    Floating gate flash memory cell
    - When a bit is programmed, electrons are stored upon the floating gate
    - This has the effect of offsetting the charge on the control gate of the transistor
    - If there is no charge upon the floating gate, then the control gate's charge determines whether or not a current flows through the channel
    - A strong charge on the control gate assumes that no current flows; a weak charge will allow a strong current to flow through
    - Similar to HDDs, flash memory must provide bit error correction and bad block management
    - NAND and NOR memories are treated differently when it comes to managing wear. In many NOR-based systems no management is used at all, since the NOR is simply used to store code, and data is stored in other devices. In this case, it would take a near-infinite amount of time for wear to become an issue, since the only time the chip would see an erase/write cycle is when the code in the system is being upgraded, which rarely if ever happens over the life of a typical system. NAND is usually found in very different applications than NOR.
    - Flash memory wears out; this is expected to get worse over time

    Retention: disappearing data
    - Bits fade away
    - Retention decreases with increasing reads/writes
    - Bits may change when adjacent bits are read
    - Time and traffic are concerns
    - Controllers typically groom read disturb errors (like DRAM refresh), which increases erase/write frequency

    Application characteristics
    - Music – reads high / writes very low
    - Video – reads high / writes very low
    - Internet cache – reads high / writes low

    On airplanes
    - Many consumers now have their own content viewing devices – do they need the airlines?
    - Is there a way to offer more to consumers, especially with their own viewers?
    - Additional special content tied into the airplane network, access to electrical power, internet
    - Should there be fixed embedded or removable storage for on-board airline entertainment?
    - Is there a way to leverage personal and airline viewers and content in new and entertaining ways?

    Read the article

  • Tellago announces SQL Server 2008 R2 BI quick adoption programs

    - by Vishal
    During the last year, we (Tellago) have been involved in various business intelligence initiatives that leverage some emerging BI techniques such as self-service BI or complex event processing (CEP). Specifically, in the last few months, we have partnered with Microsoft to deliver a series of events across the country where we present the different technologies of the SQL Server 2008 R2 BI stack such as PowerPivot, StreamInsight, Ad-Hoc Reporting and Master Data Services. As part of those events, we try to go beyond the traditional technology presentation and provide a series of best practices and lessons we have learned on real world BI projects that leverage these technologies.

    Now that SQL Server 2008 R2 has been released to manufacturing, we have launched a series of quick adoption programs that are designed to help customers understand how they can embrace the newest additions to Microsoft's BI stack as part of their IT initiatives. The programs are also designed to help customers understand how the new SQL Server features interact with established technologies such as SQL Server Analysis Services or SQL Server Integration Services. We try to keep these adoption programs very practical by doing a lot of prototyping and design sessions that will give our customers a practical glimpse of the capabilities of the technologies and how they can fit in their enterprise architecture roadmap.

    Here is our official announcement (you can blame my business partner, BI enthusiast, and Tellago's CEO Elizabeth Redding for the marketing pitch ;)):

    Tellago Marks Microsoft's SQL Server 2008 R2 Launch With Business Intelligence Quick Adoption Program

    Microsoft launched SQL Server 2008 R2 last week, which delivers several breakthrough business intelligence (BI) capabilities that enable organizations to:
    - Efficiently process, analyze and mine data
    - Improve IT and developer efficiency
    - Enable highly scalable and well-managed business intelligence on a self-service basis for business users

    The release offers a new feature called PowerPivot, which enables self-service BI by connecting business users directly to enterprise data sources and providing improved reporting and analytics. The release also offers Master Data Management, which helps enterprises centrally manage critical data assets company-wide and across diverse systems, enabling increased integrity of information over time. Finally, the release includes StreamInsight, which is a framework for implementing complex event processing (CEP) applications on the Microsoft platform. With StreamInsight, IT organizations can implement the infrastructure to process a large volume of events near real time, execute continuous queries against event streams and enable real-time business intelligence.

    As a thought leader in the business intelligence community, Tellago has recognized the occasion by launching a series of quick adoption programs to enable the adoption of this new BI technology stack in your enterprise. Our Quick Adoption programs are designed to help you:
    - Brainstorm BI solution options
    - Architect initial infrastructure components
    - Prototype key features of a solution

    As a 2-3 day program, our approach is more efficient and cost effective than a traditional proof of concept because it allows you to understand the new SQL Server 2008 R2 feature set while seeing directly how you can leverage it for your business intelligence needs.
If you are interested in learning more about the BI capabilities of Microsoft's Business Intelligence stack, including SQL Server 2008 R2, we can help.  As industry experts and software content advisers to Microsoft, Tellago is the place where ideas meet technology expertise.  Let us help you see for yourself the advantages that you can gain from Microsoft's  SQL Server 2008 R2. Email or call for more information - [email protected] or 847-925-2399.

    Read the article

  • What micro web-framework has the lowest overhead but includes templating

    - by Simon Martin
    I want to rewrite a simple, small (10 page) website, and besides a contact form it could be written in pure HTML. It is currently built with classic ASP and Dreamweaver templates. The reason I'm not simply writing 10 HTML pages is that I want to keep the layout all in one place, so I would need either includes or a masterpage.

    I don't want to use Dreamweaver templates, or batch processing (like org-mode), because I want to be able to edit using notepad (or Visual Studio), because occasionally I might need to edit a file on the server (Go Daddy's IIS admin interface will let me edit text). I don't want to use ASP.NET MVC or WebForms (which I use in my day job) because I don't need all the overhead they bring with them when essentially I'm serving up 9 static files, 1 contact form and 1 list of clubs (that I aim to use jQuery to filter). The shared hosting package I have on Go Daddy seems to take a long time to spin up when serving aspx files.

    Currently the clubs page is driven from an MS SQL database that I try to keep up to date by manually checking the dojo locator on the main HQ pages and editing the entries myself; this is again way over the top. I aim to get a text file with the club details (probably in JSON or XML format) and use that as the source for the clubs page. There will need to be a bit of programming for this, as the HQ site is unable to provide an extract / feed, so something will have to scrape the site periodically to update my clubs persistence file. I'd like that to be automated - but I'm happy to have it triggered on a visit to the clubs page so I don't need to worry about scheduling a job. I would probably have a separate process that updates the persistence file and has nothing to do with the rest of the site.

    Ideally I'd like to use Mercurial (or git) to publish. I know Bitbucket (and GitHub) both serve static page sites, so they wouldn't work in this scenario (dynamic pages and a contact form), but that's the model I'd like to use if there is such a thing.

    My requirements are:
    - Simple templating system; 1 place to define header, footer, menu etc., that can be edited using just notepad.
    - Very minimal / lightweight framework. I don't need a monster for 10 pages.
    - Must run either on IIS7 (shared Go Daddy Windows hosting) or another free host.

    Read the article

  • Using MVP, how to create a view from another view, linked with the same model object

    - by Dinaiz
    Background

    We use the Model-View-Presenter design pattern along with the abstract factory pattern and the "signal/slot" pattern in our application, to fulfill 2 main requirements:
    - Enhance testability (very lightweight GUI, every action can be simulated in unit tests)
    - Make the "view" totally independent from the rest, so we can change the actual view implementation without changing anything else

    In order to do so our code is divided into 4 layers:
    - Core: which holds the model
    - Presenter: which manages interactions between the view interfaces (see below) and the core
    - View interfaces: they define the signals and slots for a view, but not the implementation
    - Views: the actual implementation of the views

    When the presenter creates or deals with views, it uses an abstract factory and only knows about the view interfaces. It does the signal/slot binding between view interfaces. It doesn't care about the actual implementation. In the "views" layer, we have a concrete factory which deals with implementations. The signal/slot mechanism is implemented using a custom framework built upon boost::function. Really, what we have is something like this: http://martinfowler.com/eaaDev/PassiveScreen.html. Everything works fine.

    The problem

    However, there's a problem I don't know how to solve. Let's take a very simple drag-and-drop example. I have two container views (ContainerView1, ContainerView2). ContainerView1 has an ItemView1. I drag the ItemView1 from ContainerView1 to ContainerView2. ContainerView2 must create an ItemView2, of a different type, but which "points" to the same model object as ItemView1. So ContainerView2 gets a callback for the drop action with ItemView1 as a parameter, and it calls its presenter, passing it ItemView1. In this case we are only dealing with views. In MVP-PV, views aren't supposed to know anything about the presenter nor the model, right? How can I create the ItemView2 from the ItemView1, not knowing which model object ItemView1 represents?

    I thought about adding an "itemId" to every view, this id being the id of the core object the view represents. So in pseudo code, ContainerPresenter2 would do something like:

        itemView2 = abstractWidgetFactory.createItemView2();
        this.add(itemView2, itemView1.getCoreObjectId())

    I don't get too much into details. That just works. The problem I have here is that those itemIds are just like pointers, and pointers can be dangling. Imagine that by mistake I delete itemView1, and this deletes coreObject1. Then itemView2 will have a coreObjectId which represents an invalid core object.

    Isn't there a more elegant and "bulletproof" solution? Even though I never did Objective-C or Mac OS X programming, I couldn't help but notice that our framework is very similar to the Cocoa framework. How do they deal with this kind of problem? I couldn't find more in-depth information about that on Google. If someone could shed some light on this... I hope this question isn't too confusing.
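
    For what it's worth, here is a small sketch of the id-based approach described above, written in C# for brevity (illustrative only; all the names are mine, not from the poster's code base), with the lookup guard that keeps a stale id from becoming a dangling reference:

        using System;
        using System.Collections.Generic;

        class CoreObject { /* the model side; lives only in the Core layer */ }

        // View interface: knows nothing of presenter or model types, only an opaque id.
        interface IItemView
        {
            Guid CoreObjectId { get; set; }
        }

        interface IWidgetFactory
        {
            IItemView CreateItemView2();   // abstract factory: presenter never sees concrete views
        }

        class ContainerPresenter2
        {
            private readonly IWidgetFactory factory;
            private readonly IDictionary<Guid, CoreObject> model;

            public ContainerPresenter2(IWidgetFactory factory, IDictionary<Guid, CoreObject> model)
            {
                this.factory = factory;
                this.model = model;
            }

            // Slot bound to the drop signal: build a second view over the same core object.
            public void OnItemDropped(IItemView itemView1)
            {
                CoreObject core;
                if (!model.TryGetValue(itemView1.CoreObjectId, out core))
                    return; // id no longer resolves: the "dangling pointer" degrades to a no-op

                IItemView itemView2 = factory.CreateItemView2();
                itemView2.CoreObjectId = itemView1.CoreObjectId;
            }
        }

    The point of routing every id through a model-owned lookup is that deleting coreObject1 turns later uses of the id into a failed lookup rather than undefined behavior; the same idea carries over to a registry keyed by id (or weak references) on the C++/boost side.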

    Read the article

  • Code refactoring with Visual Studio 2010 Part-1

    - by Jalpesh P. Vadgama
    Visual Studio 2010 is a great IDE (Integrated Development Environment) and we all use it day by day for our coding. Visual Studio 2010 provides many great features, and today I am going to show one of them: code refactoring. This is one of the most unappreciated features of Visual Studio 2010, as lots of people are still not using it and are doing things manually. So, to explain the feature, let's create a simple console application which will print a first name and last name, like the following.

        using System;

        namespace CodeRefactoring
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string firstName = "Jalpesh";
                    string lastName = "Vadgama";
                    Console.WriteLine(string.Format("FirstName:{0}", firstName));
                    Console.WriteLine(string.Format("LastName:{0}", lastName));
                    Console.ReadLine();
                }
            }
        }

    So as you can see this is a very basic console application. Let's run it to see the output.

    Now let's explore the first feature, called Extract Method. In Visual Studio you can do that via the Refactor menu: just select the code you want to extract into a method, open the Refactor menu, and click Extract Method. Here I am selecting three lines of code and clicking on Refactor -> Extract Method, just like the following.

    Once you click the menu item, a dialog box will appear like the following. As you can see, I have highlighted two things: the first is Method Name, where I put Print as the method name, and the other is Preview method signature, where it is smart enough to extract the parameters too, as we have just selected three lines with Console.WriteLine. Once you click OK it will extract the method and your code will be like this.

        using System;

        namespace CodeRefactoring
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string firstName = "Jalpesh";
                    string lastName = "Vadgama";
                    Print(firstName, lastName);
                }

                private static void Print(string firstName, string lastName)
                {
                    Console.WriteLine(string.Format("FirstName:{0}", firstName));
                    Console.WriteLine(string.Format("LastName:{0}", lastName));
                    Console.ReadLine();
                }
            }
        }

    So as you can see in the above code, it has created a static method called Print and passed firstName and lastName as parameters. Isn't that great? It made the Print method static because I am calling it from the static Main method. Hope you liked it. Stay tuned for more. Till then, happy programming.

    Read the article

  • Change default language settings in Visual Studio 2012

    - by sreejukg
    The first thing you need to do after installing Visual Studio 2012 is to choose your IDE preferences. Once you select your preferred collection of settings, the IDE will always choose dialogs and other options according to your selection. Nowadays developers need to work with different programming environments, and because of this they may need to reset the default settings. In this article, I am going to demonstrate how you can change the default settings in Visual Studio 2012.

    For the purpose of this demonstration, I have installed Visual Studio 2012 and selected C++ as my default environment settings. So now when I go to File -> New Project, it gives me C++ templates by default, as follows. If you want to select another language, you need to expand the Other Languages section and select C# or VB.

    Now I am going to change these default settings, switching the default language preference to C#. In Visual Studio 2012, go to the Tools menu and select Import and Export Settings. Here you have three options: export the current settings so they are saved for future use, import previously saved settings, or reset everything to the defaults. It is a good idea to export your settings and import them again as needed at a later stage. To reset the settings to the defaults, select the Reset option and click Next.

    Now Visual Studio asks whether you would like to save your current settings so they can be restored in the future. Select either option and click Next; for the purpose of this demo, I have chosen not to save the settings. Visual Studio then brings up a dialog similar to the one that appears just after installation, asking you to select your IDE settings. Select the required settings from the available list and click the Finish button. If everything is OK, you will see the success message shown below.

    Now go to File -> New Project and you will see the selected language appear by default. I selected C# in the previous step, and the New Project dialog appears as follows. Changing IDE settings in Visual Studio 2012 is easy and straightforward.
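    One detail the article doesn't mention: the same reset can also be driven from a Visual Studio command prompt using devenv.exe's /ResetSettings switch. The switch and the CSharp collection name come from Microsoft's documented command-line options, but the file path below is only an example; treat this as a hedged sketch and verify the exact behavior against your own installation:

        devenv /ResetSettings                                 :: restore the default settings
        devenv /ResetSettings CSharp                          :: jump straight to the C# settings collection
        devenv /ResetSettings "C:\Temp\MySettings.vssettings" :: restore settings exported earlier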

    Read the article

  • Oracle OpenWorld Preview: Get Your Hands Dirty with Oracle WebCenter

    - by Christie Flanagan
    Feel like getting your hands dirty with Oracle WebCenter during Oracle OpenWorld next week? Roll up your sleeves and sharpen your skill set by mastering Oracle WebCenter technology in one of our Hands-On Labs. These labs are self-paced, practical learning sessions where you're guaranteed to discover new ways to derive maximum benefit from Oracle WebCenter. Experts will be available in person to answer questions and guide you through each lab.

    HOL10208 - Add Social Capabilities to Your Enterprise Applications
    Monday, Oct 1, 12:15 PM - 1:15 PM - Marriott Marquis - Salon 1/2
    Oracle Social Network enables you to add real-time collaboration capabilities to your enterprise applications, so that conversations can happen directly within your business systems. In this hands-on lab, you will try out the Oracle Social Network product to collaborate with other attendees, using real-time conversations with document-sharing capabilities. Next you will embed social capabilities into a sample Web-based enterprise application, using embedded UI components. Experts will also write simple REST-based integrations, using the Oracle Social Network API to programmatically create social interactions.

    HOL10194 - Enterprise Content Management Simplified: Oracle WebCenter Content's Next-Generation UI
    Tuesday, Oct 2, 11:45 AM - 12:45 PM - Marriott Marquis - Salon 1/2
    Regardless of the nature of your business, unstructured content underpins many of its daily functions. Whether you are working with traditional presentations, spreadsheets, or text documents, or even with digital assets such as images and multimedia files, your content needs to be accessible and manageable in convenient and intuitive ways to make working with it easier. Additionally, you need the ability to easily share documents with coworkers to facilitate a collaborative working environment. Come to this session to see how Oracle WebCenter Content's next-generation user interface helps modern knowledge workers easily manage personal and enterprise documents in a collaborative environment.

    HOL10207 - Build an Intranet Portal with Oracle WebCenter
    Tuesday, Oct 2, 1:15 PM - 2:15 PM - Marriott Marquis - Salon 1/2
    Wednesday, Oct 3, 3:30 PM - 4:30 PM - Marriott Marquis - Salon 1/2
    In this hands-on lab, you'll work with Oracle WebCenter Portal and Oracle WebCenter Content to build out an enterprise portal that maximizes the productivity of teams and individual contributors. Using browser-based tools, you'll manage site resources such as page styles, templates, and navigation. You'll edit content stored in Oracle WebCenter Content directly from your portal. You'll also experience the latest features that promote collaboration, social networking, and personal productivity.

    HOL10206 - Oracle WebCenter Sites 11g: Transforming the Content Contributor Experience
    Wednesday, Oct 3, 5:00 PM - 6:00 PM - Marriott Marquis - Salon 1/2
    Oracle WebCenter Sites 11g makes it easy for marketers and business users to contribute to and manage Websites with the new visual, contextual, and intuitive Web authoring interface. In this hands-on lab, you will create and manage content for a sports-themed Website, using many of the new and enhanced features of the 11g release.

    See Your Favorite WebCenter Products in Action
    Visit us in the exhibition hall to see demonstrations of WebCenter products.
Demo pod locations are in Moscone South, Right:
- Oracle Social Network: S-244
- Oracle WebCenter Content: S-246, S-245
- Oracle WebCenter Sites: S-247
- Oracle WebCenter Portal: S-249

More info:
- Oracle OpenWorld
- Oracle WebCenter Focus On Guide
- Oracle Customer Experience Summit @ OpenWorld

    Read the article

  • CISDI Cloud - Industrial Cloud Computing Platform based on Oracle Products

    - by Wenyu Duan
    In today's era, cloud computing is becoming integral to the vision and corporate strategy of leading organizations and is often seen as a key business driver for growth and innovation. Headquartered in Chongqing, China, CISDI Engineering Co., Ltd. is a large state-owned engineering company offering consulting, engineering design, EPC contracting, and equipment integration services to steel producers all over the world. With over 50 years of experience, CISDI offers quality services for every aspect of production for projects in the metal industry, and the company has evolved into a leading international engineering service group with 18 subsidiaries providing the complete lifecycle for E&C projects.

    A CISDI group delegation led by Mr. Zhaohui Yu, CEO of CISDI Group, Mr. Zhiyou Li, CEO of CISDI Info, Mr. Qing Peng, CTO of CISDI Info, and Mr. Xin Xiao, Head of CISDI Info's R&D, joined Oracle OpenWorld 2012 and presented a very impressive cloud initiative case in their session titled "E&C Industry Solution in CISDI Cloud - An Industrial Cloud Computing Platform Based on Oracle Products".

    CISDI group plans to build its cloud computing platform in three phases: first, it will relocate its existing technologies to Oracle systems and establish a private cloud for CISDI; second, it will gradually provide mixed cloud services for its subsidiaries and partners; and finally, it plans to launch an industrial cloud with a highly mature, secure and scalable environment providing cloud services for customers in the engineering construction and steel industries, among others. "CISDI Cloud" will become the growth engine for the organization to expand its global reach through online services, achieving the strategic objective of being the preferred choice of E&C companies worldwide.

    The new cloud computing platform is designed to provide access to a shared pool of computing resources in a self-service, dynamic, elastic and measurable way. Its flexible and scalable grid structure can support elastic expansion and sustainable growth, and can bring significant benefits in speed, agility and efficiency. Further, the platform can greatly cut down deployment and maintenance costs. The CISDI delegation highlighted these points as the key reasons why the group decided on a strategic collaboration with Oracle for building this world-class industrial cloud:

    - Oracle's strategy: open, complete and integrated
    - Oracle is the only company that can provide engineered systems, with a complete product chain of hardware and software
    - Exadata, Exalogic and EM 12c provide a solid foundation for "CISDI Cloud"

    The cloud blueprint and advanced architecture for the industrial cloud computing platform presented in the session show how Oracle products and technologies, together with industrial applications from CISDI, can provide an end-to-end portfolio of E&C industry services in the cloud. CISDI group was recognized for business leadership and innovative solutions and was presented with the Engineering and Construction Industry Excellence Award during Oracle OpenWorld.

    Read the article

  • Code refactoring with Visual Studio 2010 Part-2

    - by Jalpesh P. Vadgama
    In my previous post I wrote about the Extract Method refactoring option. In this post I am going to show some other code refactoring features of Visual Studio 2010.

    Renaming variables and methods is one of the most difficult tasks for a developer. Normally we do it like this: first we rename the method or variable, then we find all the references and update every one of them. This becomes difficult if your variable or method is referenced in many files and many places. But once you use the Refactor menu's Rename option, it becomes much easier. I am going to use the same code I created in my previous post; here it is once again for your reference:

        using System;

        namespace CodeRefractoring
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string firstName = "Jalpesh";
                    string lastName = "Vadgama";
                    Print(firstName, lastName);
                }

                private static void Print(string firstName, string lastName)
                {
                    Console.WriteLine(string.Format("FirstName:{0}", firstName));
                    Console.WriteLine(string.Format("LastName:{0}", lastName));
                    Console.ReadLine();
                }
            }
        }

    Now I want to rename the Print method in this code. To rename it, select the method name and then choose Refactor -> Rename. Once I select the Print method and click Rename, a dialog box appears like the following. I am renaming the Print method to PrintMyName. Once you click OK, a dialog appears with a preview of the changed code. Once you click Apply, your code is changed like this:

        using System;

        namespace CodeRefractoring
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string firstName = "Jalpesh";
                    string lastName = "Vadgama";
                    PrintMyName(firstName, lastName);
                }

                private static void PrintMyName(string firstName, string lastName)
                {
                    Console.WriteLine(string.Format("FirstName:{0}", firstName));
                    Console.WriteLine(string.Format("LastName:{0}", lastName));
                    Console.ReadLine();
                }
            }
        }

    So that's it. This works across multiple files as well. Hope you liked it. Stay tuned for more. Till then, happy programming.

    Read the article

< Previous Page | 610 611 612 613 614 615 616 617 618 619 620 621  | Next Page >