Search Results

Search found 11573 results on 463 pages for 'store'.


  • Oracle Retail Industry Forum Europe 2014 – Registration Now Open!

    - by Marie-Christin Hansen-Oracle
    We are delighted to announce that registration for the 4th annual Oracle Retail Industry Forum Europe (ORIF Europe) is now open. The event is being held from 10-11 September at the Renaissance St Pancras Hotel in London. ORIF Europe is a must-attend event for Oracle Retail customers, for retailers who are about to embark on an Oracle implementation, and for anyone who simply wishes to learn more about Oracle Retail solutions and how they support the provision of commerce anywhere. Further details will be announced over the coming weeks, but the following speakers are already confirmed:

    - Paul Hornby, Head of eCommerce at Shop Direct, will discuss the company's ambitions, the challenges faced, and the strategy undertaken by the team in driving the business from catalogue-based to web-based commerce. The session will reveal how Shop Direct and Oracle Retail are working together to transform the business into a world-class digital retailer by building a foundation for future growth for each of its individual brands and target markets.

    - Kate Ancketill, CEO and Founder of GDR Creative Intelligence, will illustrate what best-in-market 'Access Anywhere' retail looks like. From individual retail and next-generation personalisation of in-store service to the land grab for delivery innovation, cutting-edge brands are 'training' consumers to check into stores in exchange for concrete benefits. Kate will explore the opportunity this is opening up across the retail landscape.

    Register for the Oracle Retail Industry Forum today to secure your place.

    Read the article

  • Design for XML mapping scenarios between two different systems [on hold]

    - by deepak_prn
    Mapping XML fields between two systems is a mundane routine in integration scenarios. I am trying to make the design documents look better and give the developers a clear understanding, especially when we do not use XSLT or an IDE such as JDeveloper or the Eclipse plugins. I want it to be a high-level design that still speaks the developers' language, so that no requirements slip through the cracks. For example, one of the scenarios goes: the store cashier sells an item, and the transaction data is sent to the data management system. Now, I am writing a functional design for the scenario, which deals with mapping XML fields between our system and the data management system. Question: has anyone had to deal with mapping XML fields between two systems (without XSLT being involved)? Did you use a table to represent the field mappings (a sketch is below), or some other visualization tool that does not break the bank? I am trying to find out if there is a better way to represent XML mappings in your design documents. The widely accepted and used method seems to be a simple table to illustrate the mapping. I am wondering if there are alternate ways or tools to represent it, such as in Altova:
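
    Since the screenshot of the mapping table did not survive, here is a representative sketch of the kind of table described; every field name and rule in it is hypothetical:

        Source field (POS XML)     Target field (DMS XML)    Rule / Transformation
        -------------------------  ------------------------  -----------------------------
        /Transaction/Id            /Sale/TransactionId       Copy as-is
        /Transaction/Item/Sku      /Sale/Line/ItemCode       Copy as-is
        /Transaction/Item/Qty      /Sale/Line/Quantity       Integer; default 1 if missing
        /Transaction/Total         /Sale/Amount              Decimal, 2 places
        /Transaction/Date          /Sale/Timestamp           Convert local time to UTC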

    Read the article

  • Ask The Readers: How Do You Camouflage Your Tech?

    - by Jason Fitzpatrick
    We love having a technology-packed house as much as the next geek, but not all our gizmos, gadgets, and peripherals are exactly Home and Garden approved. How do you enjoy all your tech without your living room and office looking like an electronics store? Image courtesy of Weekly Geek’s DIY charging station tutorial. Whether it’s to hide the insanely intense LEDs, minimize the visual clutter, or boost the wife/husband acceptance factor of your geeky hobbies, there’s a variety of reasons for wrangling cables, hiding routers, or otherwise camouflaging your gear. This week we want to hear all about your tips for hiding or otherwise minimizing the appearance of gear around your home, office, and other personal spaces. Sound off in the comments with your best tips, tricks, and camouflaging techniques; check back in on Friday for the What You Said roundup.

    Read the article

  • Comparing the Performance of Visual Studio's Web Reference to a Custom Class

    As developers, we all make assumptions when programming. Perhaps the biggest assumption we make is that the libraries and tools that ship with the .NET Framework are the best way to accomplish a given task. For example, most developers assume that using ASP.NET's Membership system is the best way to manage user accounts in a website (rather than rolling your own user account store). Similarly, creating a Web Reference to communicate with a web service generates markup that auto-creates a proxy class, which handles the low-level details of invoking the web service, serializing parameters, and so on. Recently a client made us question one of our fundamental assumptions about the .NET Framework and Web Services by asking, "Why should we use the proxy class created by Visual Studio to connect to a web service?" In this particular project we were calling a web service to retrieve data, which was then sorted, formatted slightly, and displayed in a web page. The client hypothesized that it would be more efficient to invoke the web service directly via the HttpWebRequest class, retrieve the XML output, populate an XmlDocument object, then use XSLT to output the result to HTML. Surely that would be faster than using Visual Studio's auto-generated proxy class, right? Prior to this request, we had never considered rolling our own proxy class; we had always taken advantage of the proxy classes Visual Studio auto-generated for us. Could these auto-generated proxy classes be inefficient? Would retrieving and parsing the web service's XML directly be more efficient? The only way to know for sure was to test the client's hypothesis.
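
    For context, here is a minimal sketch of the hand-rolled approach the client proposed; the endpoint URL and stylesheet name are hypothetical, and a production version would need error handling:

        using System;
        using System.IO;
        using System.Net;
        using System.Xml;
        using System.Xml.Xsl;

        class ManualServiceCall
        {
            static void Main()
            {
                // Invoke the web service directly instead of going through
                // a Visual Studio-generated proxy class.
                var request = (HttpWebRequest)WebRequest.Create(
                    "http://example.com/DataService.asmx/GetData");

                using (var response = request.GetResponse())
                using (var stream = response.GetResponseStream())
                {
                    // Load the raw XML returned by the service...
                    var doc = new XmlDocument();
                    doc.Load(stream);

                    // ...and transform it straight to HTML with XSLT.
                    var xslt = new XslCompiledTransform();
                    xslt.Load("FormatResults.xslt");

                    using (var writer = new StringWriter())
                    {
                        xslt.Transform(doc, null, writer);
                        Console.WriteLine(writer.ToString());
                    }
                }
            }
        }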

    Read the article

  • 'Binary XML' for game data?

    - by bluescrn
    I'm working on a level editing tool that saves its data as XML. This is ideal during development, as it's painless to make small changes to the data format, and it works nicely with tree-like data. The downside, though, is that the XML files are rather bloated, mostly due to duplication of tag and attribute names, and to numeric data taking significantly more space as text than it would in native datatypes. A small level could easily end up as 1Mb+. I want to get these sizes down significantly, especially if the system is to be used for a game on the iPhone or other devices with relatively limited memory. The optimal solution, for memory and performance, would be to convert the XML to a binary level format. But I don't want to do this. I want to keep the format fairly flexible: XML makes it very easy to add new attributes to objects and to give them a default value when an old version of the data is loaded. So I want to keep the hierarchy of nodes, with attributes as name-value pairs, but store it in a more compact format - one that removes the massive duplication of tag/attribute names, and perhaps gives attributes native types so that, for example, floating-point data is stored as 4 bytes per float rather than as a text string. Google/Wikipedia reveal that 'binary XML' is hardly a new problem - it's been solved a number of times already. Has anyone here got experience with any of the existing systems/standards? Are any ideal for games use, with a free, lightweight and cross-platform parser/loader library (C/C++) available? Or should I reinvent this wheel myself? Or am I better off forgetting the ideal, just compressing my raw .xml data (it should pack well with zip-like compression), and taking the memory/performance hit on-load?
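
    A hedged sketch of the two ideas above - interning tag/attribute names into a string table and writing typed values with a one-byte type tag. It is in C# for brevity (the poster would want C/C++), and the format itself is invented purely for illustration, not any existing binary-XML standard:

        using System;
        using System.Collections.Generic;
        using System.IO;

        class BinaryXmlWriter
        {
            readonly Dictionary<string, ushort> names = new Dictionary<string, ushort>();
            readonly List<string> nameList = new List<string>();

            // Deduplicate tag/attribute names: each unique name is stored once
            // in a string table and referenced everywhere else by index.
            ushort Intern(string name)
            {
                ushort id;
                if (!names.TryGetValue(name, out id))
                {
                    id = (ushort)nameList.Count;
                    names[name] = id;
                    nameList.Add(name);
                }
                return id;
            }

            public void WriteFloatAttribute(BinaryWriter w, string name, float value)
            {
                w.Write(Intern(name)); // 2 bytes instead of the full name
                w.Write((byte)'f');    // type tag: float
                w.Write(value);        // 4 bytes instead of a text string
            }

            public void WriteStringTable(BinaryWriter w)
            {
                w.Write((ushort)nameList.Count);
                foreach (var n in nameList)
                    w.Write(n); // BinaryWriter length-prefixes each string
            }
        }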

    Read the article

  • Why isn't my lighting working properly? Are my normals messed up?

    - by Radek Slupik
    I'm relatively new to OpenGL and I am trying to draw a 3D model (loaded from a 3ds file using lib3ds) using OpenGL with lighting, but about half of it is drawn in black. I set up the light as such:

        glEnable(GL_LIGHTING);
        glShadeModel(GL_SMOOTH);
        GLfloat ambientColor[] = {0.2f, 0.2f, 0.2f, 1.0f};
        glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientColor);
        glEnable(GL_LIGHT0);
        GLfloat lightColor0[] = {1.0f, 1.0f, 1.0f, 1.0f};
        GLfloat lightPos0[] = {4.0f, 0.0f, 8.0f, 0.0f};
        glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColor0);
        glLightfv(GL_LIGHT0, GL_POSITION, lightPos0);

    The model is in a VBO and drawn using glDrawArrays. The normals are in a separate VBO, and they are calculated using lib3ds_mesh_calculate_vertex_normals:

        std::vector<std::array<float, 3>> normals;
        for (std::size_t i = 0; i < model->nmeshes; ++i) {
            auto& mesh = *model->meshes[i];
            std::vector<float[3]> vertex_normals(mesh.nfaces * 3);
            lib3ds_mesh_calculate_vertex_normals(&mesh, vertex_normals.data());
            for (std::size_t j = 0; j < mesh.nfaces; ++j) {
                auto& face = mesh.faces[j];
                normals.push_back(make_array(vertex_normals[j]));
            }
        }
        glBindBuffer(GL_ARRAY_BUFFER, normal_vbo_);
        glBufferData(GL_ARRAY_BUFFER, normals.size() * sizeof(decltype(normals)::value_type),
                     normals.data(), GL_STATIC_DRAW);

    The problem isn't the vertices; the model is drawn correctly when drawing it as a wireframe. I also fixed the normals in Blender using Ctrl+N. What could be the problem? Should I store the normals in a different order?

    Read the article

  • Strategy for managing lots of pictures for a website

    - by Nate
    I'm starting a new website that will (hopefully) have a lot of user-generated pictures. I'm trying to figure out the best way to store and serve these pictures. The CMS I'm using (Umbraco) has a media library that puts a folder on the server for each image. Inside of that folder you can have different sizes of the same image. The folder has an ID on it, and the database has additional information for that image along with the ID of the folder. This works great for small sites, but what if the pictures get up to 10,000, 100,000 or 1,000,000? It seems like the lookup on the directory would take a long time to find the correct folder. I'm on Windows 2008 if that makes a difference. I'm not so worried about load; I can load balance my server pretty easily and replicate the images across the servers. The nature of the site won't have a lot of users on it either, but it could have a lot of pics. Thanks. -Nate

    EDIT: After some thought, I think I'm going to create a directory for each user under a root image folder, then keep each user's pictures under that. I would be pretty stoked if I had even 5,000 users, so that shouldn't be too bad of a linear lookup. If it does get slow, I will break it down into folders like /media/a/adam/image123.png, as sketched below. If it ever gets really big, I will expand the above method to build a bigger tree. That would take a LOT of content though.
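
    A minimal sketch of that bucketing scheme; the helper name and root folder are hypothetical:

        using System;
        using System.IO;

        static class MediaPaths
        {
            // Shard user folders by the first letter of the user name so no
            // single directory has to hold every user, e.g.
            // For("adam", "image123.png") -> media\a\adam\image123.png
            public static string For(string userName, string fileName)
            {
                string bucket = char.ToLowerInvariant(userName[0]).ToString();
                return Path.Combine("media", bucket, userName, fileName);
            }
        }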

    Read the article

  • SOA Suite Demo System updated – make use of Oracle hosted demo systems or download the image

    - by JuergenKress
    To get access to the demo environments please contact OPN! Global Sales Engineering (GSE, formerly DSS) is happy to announce the availability of the SOA 11g (11.1.1.8) Platform. The Platform is fully featured, based on a plug-and-play architecture, and designed to build best-of-breed SOA & Business Process Management 11g demos.

    Demo Highlights: Designed and developed on the "build your own demos (POC)" concept. Installed and configured with the latest versions of FMW products: SOA, Business Activity Monitoring (BAM), Oracle Service Bus (OSB), Oracle Enterprise Repository (OER), Oracle Event Processor (OEP), Oracle Service Registry (OSR), WebCenter Content and WebCenter Portal. The platform is designed and tuned for best performance, with hot plug-in capability for additional middleware components.

    Call to Action: Check out the 1 minute video overview of this SOA 11g (11.1.1.8) Platform. Review the latest Release Notes and other collateral on the Demo Store. Visit the GSE home page to book the "SOA 11.1.1.8.0 Platform" customizable demo. Additional information is available on this page. For questions or feedback please contact [email protected] or [email protected]. This announcement will appear in the archive as Number 453.

    Support: If you need assistance or encounter any issues please submit a GSE Repository ticket or call the GSE Support Hotline, available 24 hours a day, Monday through Friday: US/CAN +1.650.506.8763, EMEA +44 118 9240808, APAC +65.6436.2150, LAD +1.650.506.8763, Japan +81-3-6834-6097.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • NoVa Code Camp 2010.1 – Don't Miss It!

    - by John Blumenauer
    Tomorrow, June 12th, is NoVa Code Camp 2010.1, held at the Microsoft Technical Center in Reston, VA.  What’s in store?  Lots of great topics by some truly knowledgeable speakers from the mid-Atlantic region.  This event will have four talks on Azure alone, plus sessions on ASP.NET MVC2, SharePoint, WP7, Silverlight, MEF, WCF, and some great presentations centered around best practices and design. The schedule can be found at:  http://novacodecamp.org/RecentCodeCamps/NovaCodeCamp201001/Schedule/tabid/202/Default.aspx The session descriptions and speaker list are at:  http://novacodecamp.org/RecentCodeCamps/NovaCodeCamp201001/Sessions/tabid/197/Default.aspx We’re also fortunate this year to have several excellent sponsors.  The sponsor list can be found at:  http://novacodecamp.org/RecentCodeCamps/NovaCodeCamp201001/Sponsors/tabid/198/Default.aspx.  As a result of the excellent sponsors, attendees will be enjoying nice food throughout the day, and the end-of-day raffle will have some great surprises regarding swag! I’ll be presenting MEF, with an introduction and then a look at how it can be used to extend Silverlight applications.  If you’re new to MEF and/or Silverlight, don’t worry.  I’ll be easing into the concepts so everyone will leave with an understanding of MEF by the end of the session.   Don’t miss NoVa Code Camp 2010.1.  See YOU there!

    Read the article

  • Windows 8 Apps Unleashed Now in Bookstores!

    - by Stephen.Walther
    My book Windows 8 Apps with HTML5 and JavaScript Unleashed is now in bookstores! Learn how to create Windows 8 apps with JavaScript. And the book is in color! All of the code listings and illustrations are in color. Why build Windows 8 apps? When you create a Windows 8 app, you can put your app in the Windows 8 Store. In other words, customers can buy your app directly from Windows. Think iPhone apps, but for a much larger market. In my book, I explain how you can create both game apps and simple productivity apps with JavaScript. The book is a short read and I include plenty of code samples that have been tested against the final release of Windows 8. You can buy the book at your local Barnes & Noble bookstore or through Amazon. It looks like the book is also available for the Kindle: Windows 8 Apps with HTML5 and JavaScript Unleashed

    Read the article

  • Crontab -e gives me error messages

    - by DNA
    I get a bunch of error messages when I run crontab -e. Here are the error messages, and here is my crontab file under `/usr/bin/':

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # m h dom mon dow user  command
        17 * * * *  root  cd / && run-parts --report /etc/cron.hourly
        25 6 * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6 * * 7  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6 1 * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        30 * * * *  root  rsync /home/dnaneet/Downloads/*.pdf /home/dnaneet/Downloads/pdfs/
        #

    I notice that the last task ('rsync') NEVER RUNS! Why is this happening? What did I do wrong? Running Ubuntu 11.10/Bash. I have read this... Am I missing a shebang? And I don't know if my anacron jobs run.

    Edit 1: In light of Masi's comment, I commented out lines 17 thru 25 of my crontab file with #. Now when I run sudo crontab -e, all I get is:

        /usr/bin/crontab: 11: 17: not found
        /usr/bin/crontab: 12: 25: not found

        (gedit:4301): Gtk-WARNING **: Attempting to store changes into `/root/.local/share/recently-used.xbel', but failed: Failed to create file '/root/.local/share/recently-used.xbel.GOHVBW': No such file or directory

        (gedit:4301): Gtk-WARNING **: Attempting to set the permissions of `/root/.local/share/recently-used.xbel', but failed: No such file or directory

    What in the world?

    Read the article

  • ASP.NET ViewState Tips and Tricks #1

    - by João Angelo
    In User Controls or Custom Controls, DO NOT use ViewState to store non-public properties. Persisting non-public properties in ViewState results in loss of functionality if the Page hosting the controls has ViewState disabled, since it can no longer reset the values of non-public properties on page load. Example:

        public class ExampleControl : WebControl
        {
            private const string PublicViewStateKey = "Example_Public";
            private const string NonPublicViewStateKey = "Example_NonPublic";

            // DO
            public int Public
            {
                get
                {
                    object o = this.ViewState[PublicViewStateKey];
                    if (o == null)
                        return default(int);
                    return (int)o;
                }
                set { this.ViewState[PublicViewStateKey] = value; }
            }

            // DO NOT
            private int NonPublic
            {
                get
                {
                    object o = this.ViewState[NonPublicViewStateKey];
                    if (o == null)
                        return default(int);
                    return (int)o;
                }
                set { this.ViewState[NonPublicViewStateKey] = value; }
            }
        }

        // Page with ViewState disabled
        public partial class ExamplePage : Page
        {
            protected override void OnLoad(EventArgs e)
            {
                base.OnLoad(e);
                this.Example.Public = 10;    // Restore Public value
                this.Example.NonPublic = 20; // Compile error!
            }
        }

    Read the article

  • Working with data and meta data that are separated on different servers

    - by afuzzyllama
    While developing a product, I've come across a situation where my group wants to store the meta data for data entry forms (questions, layout, etc.) in a different database than the database where the collected data is stored. This is mostly for security: we want our meta data to be public facing while keeping the collected data as secure as possible. I was thinking about writing a web service that provides the meta information, which the data collection program could access. The only issue I see with this approach is that the front end is going to have to match the meta data with the collected data, which would be more efficient as a join on the back end. Currently, this system is slated to run on .NET and MSSQL. I haven't played around with .NET libraries running in SQL, but I'm considering trying to create logic that would pull from the web service, convert the meta data into a table that SQL can join on, and return the combined data and meta data that way. Is this solution the wrong way to approach the problem? Is there a pattern or "industry standard" way of bringing together two datasets that don't live in the same database?
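
    For comparison, a minimal sketch of the front-end alternative - matching the two datasets in memory after fetching the meta data from the web service. All type and member names here are hypothetical:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical shapes for the two datasets.
        class QuestionMeta { public int QuestionId; public string Label; }
        class Answer { public int QuestionId; public string Value; }

        static class FrontEndJoin
        {
            // Joins collected answers to their form meta data by question id,
            // standing in for the back-end SQL join described above.
            public static IEnumerable<string> Combine(
                IEnumerable<QuestionMeta> meta, IEnumerable<Answer> answers)
            {
                var labels = meta.ToDictionary(m => m.QuestionId, m => m.Label);
                return answers.Select(a => labels[a.QuestionId] + ": " + a.Value);
            }
        }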

    Read the article

  • Database Replication check script not running

    - by Tarun
    I'm trying to create a database replication checking script, but I'm getting an error while executing it. Here is the script:

        #!/bin/bash
        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        export PATH

        #Server Name
        Server="Test Server"

        #My Sql Username and Password
        User=root
        Password="a"

        #Maximum Slave Time Delay
        Delay="60"

        #File Path to store error and email the same
        Log_File=/tmp/replicationcheck.txt

        #Email Settings
        Subject="$Server Replication Error"
        Sender_Name=TestServer
        Recipients="[email protected]"

        #Mail Alert Function
        mailalert(){
            sendmail -F $Sender_Name -it <<END_MESSAGE
        To: $Recipients
        Subject: $Subject
        $Message_Replication_Error
        `/bin/cat $Log_File`
        END_MESSAGE
        }

        #Show Slave Status
        Show_Slave_Status=`echo "show slave status \G;" | mysql -u $User -p$Password 2>&1`

        #Getting list of queries in mysql
        $Show_Slave_Status | grep "Last_" > $Log_File

        #Check if slave running
        $Show_Slave_Status | grep "Slave_IO_Running: No"
        if [ "$?" -eq "0" ]; then
            Message_Replication_Error="$Server Replication error please check. The Slave_IO_Running state is No."
            mailalert
            exit 1
        else
            $Show_Slave_Status | grep "Slave_IO_Running: Connecting"
            if [ "$?" -eq "0" ]; then
                Message_Replication_Error="$Server Replication error please check. The Slave_IO_Running state is Connecting."
                mailalert
                exit 1
            fi
        fi

        #Check if replication delayed
        Seconds_Behind_Master=$Show_Slave_Status | grep "Seconds_Behind_Master" | awk -F": " {' print $2 '}
        if [ "$Seconds_Behind_Master" -ge "$Delay" ]; then
            Message_Replication_Error="Replication Delayed by $Seconds_Behind_Master."
            mailalert
        else
            if [ "$Seconds_Behind_Master" = "NULL" ]; then
                Message_Replication_Error="$Server Replication error please check. The Seconds_Behind_Master state is NULL."
                mailalert
            fi
        fi

    Read the article

  • Install proprietary drivers 14.04 NVIDIA (steam segmentation issue)

    - by allthosemiles
    Recently, I finally got the official drivers for my NVIDIA 560 Ti card installed on Ubuntu 14.04 (hooray!). However, I started looking into installing Steam and I'm getting segmentation errors when I try to run the software. I tried installing 32-bit libs and it seemed like they weren't available or were already installed. Upon further investigation, I found that a suggested solution is to install the proprietary drivers, install Steam, then switch back to the other drivers. I'm not really sure what "proprietary drivers" are, in all honesty. Has anyone gone through this process who could provide some insight here? (I installed the official 64-bit driver from the NVIDIA site for my 560 Ti, just for reference, and the Ubuntu version installed is 64-bit as well.)

    Update: This is the error text I get when trying to run Steam after installing it via the Ubuntu store:

        Running Steam on ubuntu 14.04 64-bit
        STEAM_RUNTIME is enabled automatically
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        /home/dbrewer/.steam/steam.sh: line 755: 3943 Segmentation fault (core dumped) $STEAM_DEBUGGER "$STEAMROOT/$PLATFORM/$STEAMEXE" "$@"
        mv: cannot stat ‘/home/dbrewer/.steam/registry.vdf’: No such file or directory
        Installing bootstrap /home/dbrewer/.steam/bootstrap.tar.xz
        Reset complete!
        Restarting Steam by request...
        Running Steam on ubuntu 14.04 64-bit
        STEAM_RUNTIME has been set by the user to: /home/dbrewer/.steam/ubuntu12_32/steam-runtime
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        /home/dbrewer/.steam/steam.sh: line 755: 4066 Segmentation fault (core dumped) $STEAM_DEBUGGER "$STEAMROOT/$PLATFORM/$STEAMEXE" "$@"

    What I get when I run "steam --reset":

        mv: cannot stat ‘/home/dbrewer/.steam/registry.vdf’: No such file or directory
        Installing bootstrap /home/dbrewer/.steam/bootstrap.tar.xz
        Reset complete!

    Read the article

  • DIY Carbonator Creates Pop Rocks Like Fizzy Fruit [Science]

    - by Jason Fitzpatrick
    If you’ve ever sat around wishing that scientists would stop wasting time trying to solve pressing global problems and instead genetically engineer a bizarre but delicious hybrid of Pop Rocks candy and wholesome fruit, this mad scientist experiment is for you. Over at Evil Mad Scientist Laboratories they share a really fun weekend project. Contributor Rich Faulhaber was looking for a way to make eating fruit extra fun and science-infused for his kids. His solution? Build a homemade carbon dioxide injector that infuses fruit with carbonation. Having trouble imagining that? Envision a bowl of strawberries where every strawberry bursts into a crazy flurry of strawberry flavor and champagne bubbles every time you bite into it. Fizzy fruit! Hit up the link below to see how he took pretty common parts - a CO2 tank from a paintball gun, a water filter canister from the hardware store, and other cheap and readily available parts (with the exception of the gas regulator, which he suggests you shop garage sales and surplus stores to find a deal on) - and combined them to create a CO2 fruit infuser, and to read more about his setup and the procedure he uses to infuse fruit with carbonation. The CO2inator [Evil Mad Scientist Laboratories via Hack a Day]

    Read the article

  • SL150 Modular Tape Library Demo Equipment Purchase Opportunity – Limited Special Pricing on Demo Configuration

    - by Cinzia Mascanzoni
    Oracle is pleased to announce that, for a limited time, Oracle VADs may purchase special SL150 Modular Tape Library configurations for demonstration purposes at a significantly reduced price. Submit your order today for these special SL150 Modular Tape Library configurations and you can start showcasing these products in partner demonstrations and proof-of-concepts. VADs may also sell demo units to their VARs so that they may use them in their customer evaluations to help shorten the sales cycle. The offer also allows VARs to sell the demo configuration after a prescribed demonstration period to support the demo product’s cost of ownership. Why wait? Order today!

    Rules and Guidelines:

    - Only authorized VADs are allowed to purchase the special SL150 Modular Tape Library configurations.
    - Purchase time frame is from now until February 28, 2013.
    - Only the predetermined configurations are approved for purchase at the prescribed discounts.
    - Supply is allocated per region and it’s limited.
    - Orders MUST be placed via the Oracle Partner Store (OPS) where applicable. See below for online and offline order processes.
    - If reselling to a VAR, the VAD must include the Partner Demonstration Hardware Terms with the order (online via OPS or with the offline VAD Ordering Document).

    Please mark your calendars for the SL150 Modular Tape Library Demo Program webcast on Sept 5th. The objective of this call is to share the details of this demo program with you. For details on how to connect to the webcast, contact your VAD Manager.

    Read the article

  • Oracle Enterprise Content Management 11gR1 Patch Set 3 Released

    - by michelle.huff
    We're pleased to announce an updated patch set for Oracle Enterprise Content Management 11gR1: PS3 (11.1.1.4.0). Patch Set 3 (PS3) supports additional platforms and applications, and adds several new features to the products. Highlights include:

    - Content Server (repository for UCM, URM & I/PM): New security capabilities, file store provider updates.
    - Desktop Integration Suite: Windows 7 64-bit and Office 2010 (32 & 64-bit) support and a new "Recent Content Items" menu.
    - Universal Content Management (UCM): Site Studio Manager for Site Studio for External Applications, new template management options, and the ability to run Site Studio & Site Studio for External Applications 11g components on Content Server 10gR3.
    - Imaging and Process Management (I/PM): Now certified with Oracle Business Process Management (BPM) 11g, Oracle Single Sign On (OSSO) 10g and Oracle Access Manager (OAM) 10g; export search results to Microsoft Excel.
    - ECM Adapter for PeopleSoft: Support for UCM 11g Managed Attachments (support for 10g released earlier in 2010) and certification with PeopleTools 8.50.
    - Information Rights Management (IRM): Desktop support for Microsoft Office 2010, Adobe Reader X and Microsoft SharePoint 2010.

    Customer Webcast: We'll be covering this new release in our Quarterly Customer Update Webcast scheduled for this week, January 19/20, 2011. Register today.

    More Information: Downloads are now available on Oracle Technology Network (OTN); availability via eDelivery is coming soon. Read the updated ECM documentation for 11.1.1.4.0, review the ECM 11.1.1.4.0 Upgrade & Patch Guides, and see the Release Notes.

    Read the article

  • Our New Website Header (& Other Tweaks)

    - by justin.kestelyn
    Last week, the Oracle Technology Network Website went fixed-width. There are several reasons for this, most relating to providing a consistent user experience, easier management of Website content, etc. Furthermore, it's fairly standard for developer portals these days - java.sun.com, MSDN, and IBM DeveloperWorks are also all fixed-width sites. (My apologies to everyone who is unhappy about this change, but it really is an overall positive one.) Today, we have rolled out a brand-new header, the first step in what we call the "Mosaic" project - an effort to make the user experience across all Oracle Websites more consistent. To summarize the impact:

    - The "pull-down" menus on the OTN site disappear; most of them move into a "flyout" button in the header. You can access the OTN flyout from any page on Oracle.com or the OTN site. Great for our page views. :)
    - You also have direct access to the Downloads index from anywhere on Oracle.com.
    - If you so desire, you can directly access product overviews, Oracle University and Support info, Oracle Store, etc. from the OTN site now.
    - Due to limited space in the flyout we cannot accommodate *all* the pull-down items, but they are all no more than 1 or 2 clicks away.

    This approach has been validated in extensive user testing over the last few months; I welcome your feedback now in comments. There are many other changes in train, with the next one being a major homepage redesign, the first in 4 or 5 years.

    Read the article

  • Different methods of ammo resupply

    - by Chris Mantle
    I'm writing a small game at the moment. I have one or two design elements that aren't locked down yet, and I wanted to ask for input on one of these. For dramatic effect, the player's character in my game is immobilised, alone, and has a supposedly limited amount of ammo for their weapons. However, I would like to periodically resupply the player with ammo (to balance the level of difficulty and to allow the player to continue if they're doing well). I'm trying to think of a method of resupply that's different from the more familiar strategies of making ammo magically appear or having the antagonists drop some when they die. I'd like to emphasise the notion of the player's isolation as much as possible, and finding a way of 'sneaking' ammo to the player without removing too much of that emphasis is basically what I'm trying to think of (it's definitely a valid argument that resupplying the player removes it anyway). I have considered a sort of simple in-game 'store', where kills get you points that you can spend on ammo for your favourite weapon. This might work well, and may also be good for supporting a simple micro-transaction business model within the game. However, you'd have to pause the game often to make purchases, which would interrupt the action, and it works against the notion of isolation. Any thoughts?

    Read the article

  • KERPOOOOW!

    - by Matt Christian
    Recently I discovered the colorful world of comic books.  In the past I've read comics a few times but never really got into them.  When I wanted to start a collection I had to decide between video games and comics, and had stayed away from comics because I am less familiar with them. In any case, I stopped by my local comic shop and picked up a few comics and a few trade paperbacks.  After reading them and understanding their basic flow I began to enjoy not only the stories but the art styles hiding behind those little white bubbles of text (well, they're USUALLY white).  My first stop at the comic store I ended up with:

    - Nemesis #1 (cover A)
    - Shuddertown #1 (cover A, I think)
    - Daredevil: King of Hell's Kitchen Trade Paperback
    - Peter Parker: Spiderman - One Small Break Trade Paperback

    It took me about 3-4 days to read all of that, including re-reading the single issues and glancing over the beginning of Daredevil again.  After a week of looking around online I knew a little more about the comics I wanted to pick up and the kind of art style I enjoyed.  While Peter Parker: Spiderman was ok, I really enjoyed the detailed, realistic look of Daredevil and Shuddertown. Now, a few years back I picked up the game The Darkness for PS3.  I knew it was based off a comic but never read the comic.  I decided I'd pick up a few issues of it and ended up with:

    - The Darkness #80 (cover A)
    - The Darkness #81 (cover A)
    - The Darkness #82 (cover A)
    - The Darkness #83 (cover A)
    - The Darkness Shadows and Flame #1 (one-shot; cover A)
    - The Darkness Origins: Volume 1 Trade Paperback (contains The Darkness #1-6)
    - New Age boards and bags for storing my comics

    The Darkness is relatively good, though jumping from issue #6 to issue #80 I lost a bit on who the enemy in the current series is.  Out of all of them, I think issue #83 was my favorite.  I'm signed up at the local shop to continue getting Nemesis, The Darkness, and Shuddertown, and I'll probably pick up a few different ones this weekend...

    Read the article

  • Tessellating to a curve?

    - by Avi
    I'm creating a game engine, and I'm trying to define a 3D model format I want to use. I haven't come across a format that quite does what I want. My game engine assumes a shader model 5+ environment; by the time I'm finished with it, that won't be a very unreasonable requirement. Because it assumes such a modern environment, I'm going to try and exploit tessellation. The most popular way, it seems, to procedurally increase geometry through tessellation is to tessellate to a height map. This works for a lot of things, but has limitations in that height maps still use up VRAM and have only finite scalability. So I want to be able to use curves to define what a mesh should tessellate to. The thing is, I have no idea what definition of curves I should use, how I should store it, or how I should tessellate to it. Do I use NURBS curves? Bezier? Hermite? And once I figure that out, is there an algorithm to determine how the tessellation shader should produce and move vertices to match the curve as closely as possible (see the sketch below)? Is the infinite scalability and lower memory usage, compared to height maps, worth the added computational complexity? I'm sorry I'm kind of ignorant as to these matters. I just don't know where to start.
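
    For the "move vertices to match the curve" part, here is a hedged sketch of the usual starting point: de Casteljau evaluation of a cubic Bezier curve. It is written in C# for readability; in a real engine this math would live in the tessellation evaluation (domain) shader, with t supplied by the tessellator, and all names here are illustrative:

        using System;

        struct Vec3
        {
            public float X, Y, Z;
            public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }

            public static Vec3 Lerp(Vec3 a, Vec3 b, float t) =>
                new Vec3(a.X + (b.X - a.X) * t,
                         a.Y + (b.Y - a.Y) * t,
                         a.Z + (b.Z - a.Z) * t);
        }

        static class Bezier
        {
            // De Casteljau: repeated linear interpolation between the four
            // control points; called once per vertex the tessellator emits.
            public static Vec3 Evaluate(Vec3 p0, Vec3 p1, Vec3 p2, Vec3 p3, float t)
            {
                Vec3 a = Vec3.Lerp(p0, p1, t);
                Vec3 b = Vec3.Lerp(p1, p2, t);
                Vec3 c = Vec3.Lerp(p2, p3, t);
                return Vec3.Lerp(Vec3.Lerp(a, b, t), Vec3.Lerp(b, c, t), t);
            }
        }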

    Read the article

  • Big AdventureWorks2012

    - by jamiet
    Last week I launched AdventureWorks on Azure, an initiative to make SQL Azure accessible to anyone, in my blog post AdventureWorks2012 now available for all on SQL Azure. Since then I think it's fair to say that the reaction has been lukewarm, with 31 insertions into the [dbo].[SqlFamily] table and only 8 donations via PayPal to support it; on the other hand, those 8 donors have been incredibly generous, and we nearly have enough in the bank to cover a full year's worth of availability. It was always my intention to try and make this offering more appealing, and to that end I have used an adapted version of Adam Machanic's make_big_adventure.sql script to massively increase the amount of data in the database and give the community more scope to really push SQL Azure and see what it is capable of. There are now two new tables in the database:

    - [dbo].[bigProduct] with 25200 rows
    - [dbo].[bigTransactionHistory] with 7827579 rows

    The credentials to login and use AdventureWorks on Azure are as they were before:

    - Server: mhknbn2kdz.database.windows.net
    - Database: AdventureWorks2012
    - User: sqlfamily
    - Password: sqlf@m1ly

    Remember, if you want to support AdventureWorks on Azure simply click here to launch a pre-populated PayPal Send Money form - all you have to do is login, fill in an amount, and click Send. We need more donations to keep this up and running, so if you think this is useful and worth supporting, please please donate.

    I mentioned that I had to adapt Adam's script; the main reasons being:

    - Cross-database queries are not yet supported in SQL Azure, so I had to create a local copy of [dbo].[spt_values] rather than reference the one in [master].
    - SELECT…INTO is not supported in SQL Azure.
    - The 1GB limit of SQL Azure web edition meant that there would not be enough space to store all the data generated by Adam's script, so I had to decrease the total number of rows.

    The amended script is available on my SkyDrive at https://skydrive.live.com/redir.aspx?cid=550f681dad532637&resid=550F681DAD532637!16756&parid=550F681DAD532637!16755

    @Jamiet

    Read the article

  • Oracle Enterprise Data Quality Adds Global Address Verification Capabilities for Greater Accuracy and Broader Location Coverage

    - by Mala Narasimharajan
    Data quality has many flavors to it: product, customer - you name the data domain and there's data quality associated with it. Address verification is a little different in that there is a tremendous amount of variation as well as nuance attached to it. Specifically, what makes address verification challenging is that, more often than not, addresses are incomplete, riddled with misspellings, assigned incorrect postal codes, or contain non-address items. Almost all data has locations, and accurate locations power a wealth of business processes: Customer Relationship Management, data quality, delivery of materials, goods or services, fraud detection, insurance risk assessment, data analytics, store and territory planning, and much more. Oracle Address Verification Server provides location-based services as well as deeper parsing and analysis capabilities for Oracle Enterprise Data Quality. Pre-integrated with the EDQ platform, Oracle Address Verification Server provides robust parsing and validation, as well as specialized location information, for over 240 countries - all populated countries on Earth. Oracle Enterprise Data Quality (EDQ) is a data quality platform dedicated to addressing the distinct challenges of customer and product data quality; it performs advanced data profiling to identify and measure poor-quality data and identify rule requirements, as well as semantic and pattern-based recognition to accurately parse and standardize data that is poorly structured. EDQ is integrated with Oracle Master Data Management, including Oracle Customer Hub and Oracle Product Hub, as well as Oracle Data Integrator Enterprise Edition and Oracle CRM. Address Verification Server provides key address verification services for Oracle CRM and Oracle Customer Hub. In addition, Address Verification Server provides greater accuracy when handling address data due to its expanded sources and extensible knowledge repository, solid parsing across locales and countries, and adept handling of extraneous data in address fields. For more information on Oracle Address Verification Server visit: http://bit.ly/GMUE4H and http://bit.ly/GWf7U6

    Read the article
