Search Results

Search found 8266 results on 331 pages for 'distributed systems'.


  • FY11 plans – how can you increase your SOA business?

    - by Jürgen Kress
    Thanks for a fantastic FY10! It was great to work with all of you. Yes, with the economic crisis the fiscal year was hard. SOA and Oracle Fusion Middleware address these challenges and can help companies save costs by integrating their systems and automating and changing their processes. More when we publish our fiscal year results. What is on the agenda for FY11? Specialization: it is key that you become SOA & Application Grid Specialized. We will focus our activities and budgets on partners with Specialization! Sales campaigns: to support our joint business we will continue to run joint sales campaigns. With OFM 11g there is a great opportunity to generate service revenue by migrating and consolidating on the platform. It is key that you register your opportunities within the Open Market Model (OMM) to ensure sales alignment. Enablement: with the release of many new products and versions, training is key. We will continue to offer training dedicated to your role: sales, pre-sales and implementation. Make sure that you check your local partner training calendar and sign up for the next bootcamps. Thanks for your support! Jürgen Kress

    Read the article

  • Multiple sites with the same codebase in Python

    - by Jimmy
    I am trying to run a large number of sites which share about 90% of their code. They are simply designed to query an API and return the results. They will have a common userbase/database but will be configured slightly differently and will have different CSS (perhaps even different templating). My initial idea was to run them as separate applications with a common library, but I have read about the sites framework, which would allow them to run from a single instance of Django and might help to reduce memory usage. https://docs.djangoproject.com/en/dev/ref/contrib/sites/ Is the sites framework the right approach to a problem like this, and does it have real benefits over running separate applications? Initially I thought it was, but now I think otherwise. I have heard the following: your SITE_ID is set in settings.py, so in order to have multiple sites you need multiple settings.py configurations, which means multiple distinct processes/instances. You can of course share the code base between them, but each site will need a dedicated worker/WSGIDaemon to serve it. This effectively removes any benefit of running multiple sites under one hood, if each site needs a uWSGI instance running. Alternative systems: https://github.com/iivvoo/django_layers https://github.com/shestera/django-multisite I don't know which route to take with this.
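
    A minimal sketch of the separate-settings approach described above, assuming a hypothetical project layout (every module, app and path name below is illustrative, not taken from the question):

        # settings_base.py -- the ~90% of configuration the sites share
        INSTALLED_APPS = [
            "django.contrib.contenttypes",
            "django.contrib.sites",        # the sites framework itself
            "core",                        # the shared application code
        ]
        DATABASES = {"default": {
            "ENGINE": "django.db.backends.postgresql_psycopg2",
            "NAME": "shared_db",           # common userbase/database
        }}

        # settings_site1.py -- one thin module per site,
        # each served by its own worker/WSGI daemon
        from settings_base import *

        SITE_ID = 1                            # row in the django_site table
        TEMPLATE_DIRS = ("templates/site1",)   # per-site templating
        STATIC_URL = "/static/site1/"          # per-site CSS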

    Read the article

  • What to Return with Async CRUD methods

    - by RualStorge
    While there is a similar question focused on Java, I've been in debates about utilizing Task objects. What's the best way to handle returns on CRUD methods (and similar)? Common returns we've seen over the years are: void (no return unless there is an exception); Boolean (true on success, false on failure, exception on unhandled failure); int or GUID (return the newly created object's Id, 0 or null on failure, exception on unhandled failure); the updated object (exception on failure); a result object (an object that houses the manipulated object's ID, a Boolean or status field indicating success or failure, exception information if there was any, etc.). The concern comes into play as we've started moving over to C# 5's async functionality, and this brought up the question of how we should handle CRUD returns at large scale. In our systems we have a little of everything in regards to what we return, and we want to standardize these returns... Now the question is: what is the recommended standard? Is there even a recommended standard yet? (I realize we need to decide our own standard, but typically we do so by looking at best practices, seeing if they make sense for us and going from there; here we're not finding much to work with.)
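
    For illustration only, here is the shape of the "result object" option from the list above; the sketch is Python with asyncio rather than C# (and every name in it is made up), but the same structure maps directly onto an async method returning Task<Result>:

        import asyncio
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Result:
            success: bool                       # status field
            new_id: Optional[int] = None        # ID of the created record, if any
            error: Optional[Exception] = None   # handled failure, carried as data

        async def create_user(name: str) -> Result:
            try:
                await asyncio.sleep(0.01)       # stand-in for the real async DB call
                return Result(success=True, new_id=42)
            except Exception as exc:            # unhandled failures would still raise
                return Result(success=False, error=exc)

        print(asyncio.run(create_user("sam")))
        # Result(success=True, new_id=42, error=None)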

    Read the article

  • Raspberry Pi and Java SE: A Platform for the Masses

    - by Jim Connors
    One of the more exciting developments in the embedded systems world has been the announcement and availability of the Raspberry Pi, a very capable computer that is no bigger than a credit card. At $35 US, initial demand for the device was so significant that very long back orders quickly ensued. After months of patiently waiting, mine finally arrived. Those initial growing pains appear to have been fixed, so availability now should be much more reasonable. At a very high level, here are some of the important specs: Broadcom BCM2835 system on a chip (SoC); ARM1176JZF-S, with floating point, running at 700MHz; VideoCore 4 GPU capable of Blu-ray quality playback; 256MB RAM; 2 USB ports and Ethernet; boots from SD card; Linux distributions (e.g. Debian) available. So what's taking place with respect to the Java platform and the Raspberry Pi? A Java SE Embedded binary suitable for the Raspberry Pi is available for download (Arm v6/7) here. Note, this is based on the armel architecture, a variety of Arm designed to support floating point through a compatibility library, which operates on more platforms but can hamper performance. In order to use this Java SE binary, select the available Debian distribution for your Raspberry Pi. The more recent Raspbian distribution is based on the armhf (hard float) architecture, which provides for more efficient hardware-based floating point operations. However armhf is not binary compatible with armel. As of the writing of this blog, Java SE Embedded binaries are not yet publicly available for the armhf-based Raspbian distro, but as mentioned in Henrik Stahl's blog, an armhf release is in the works. As demonstrated at the just-completed JavaOne 2012 San Francisco event, the graphics processing unit inside the Raspberry Pi is very capable indeed, and makes for an excellent candidate for JavaFX. As such, plans also call for a Pi-optimized version of JavaFX in a future release. A thriving community around the Raspberry Pi has developed at light speed, and as evidenced by the packed attendance at Pi-specific sessions at JavaOne 2012, the interest in Java for this platform is following suit. So stay tuned for more developments...

    Read the article

  • Opportunities in Development in our Swedish office

    - by anca.rosu
    Hi everyone, my name is Henrik and I joined the JRockit group in 2004. Before that my background was Microsoft, as both a Test Competence lead and as a Program Manager. As an Engineering Manager at Oracle I lead a team of 11 developers. I focus on people management and the daily operations of the department, with a heavy focus on interaction and dependencies between the groups and departments here at the Stockholm development site. I also make sure my team delivers on our commitments. I would like to give you a brief summary of the Oracle JRockit team: The development group in Stockholm delivers several products for the Oracle Fusion Middleware stack. Our main products are JRockit VE, which allows you to run a Java Virtual Machine without an operating system; the JRockit Java Virtual Machine, which is the default JVM for all Oracle middleware products; and JRockit Mission Control, a set of tools that allows developers to monitor their applications at runtime and perform advanced latency analysis as well as in-production memory leak detection, etc. The office has several departments focusing on different aspects of the product development process: not only building features and testing them, but everything from building the infrastructure needed to automatically build and test the products, to sustaining engineering that tracks down bugs in customer systems and provides them with patches. Some inspirational lines around what the Oracle JRockit group can offer you in terms of progress, development and learning: it is a unique chance to get insight and experience building enterprise-class software for one of the world's largest software companies. There are almost unlimited possibilities for the right candidate to learn about silicon features and how to implement support for them in software, and about compiler optimizations. The position will also give insight into the processes needed to produce software at this level in the industry. If you have any questions related to this article feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com. Technorati Tags: Development, Sweden, JRockit, Java, Virtual Machine, Oracle Fusion Middleware, software

    Read the article

  • Leveraging .Net 4.0 Framework Tools For Encrypting Web Configuration Sections

    - by Sam Abraham
    I would like to share a few points with regards to encrypting web configuration sections in .Net 4.0. This information is also applicable to .Net 3.5 and 2.0. Two methods can work perfectly for encrypting connection strings in a Web project configuration file:

    1. Do it all yourself! In this approach, helper functions for encrypting/decrypting configuration file content are implemented. The program would explicitly retrieve the appropriate content from the configuration file, then decrypt it appropriately. Disadvantages of this implementation are the added overhead of maintaining the encryption/decryption code, as well as the burden of always ensuring sections are decrypted before use and re-encrypted whenever edited.

    2. Leverage the .Net 4.0 Framework (the way to go!). Fortunately, all the tools needed for protecting configuration files are built into the .Net 2.0/3.5/4.0 versions, with very little setup needed. To encrypt connection strings, one can use the ASP.Net IIS Registration Tool (aspnet_regiis.exe). Note that a 64-bit version of the tool also exists under the Framework64 folder for 64-bit systems. The command we need to encrypt our web.config file connection strings is simply the following:

        aspnet_regiis -pe "connectionStrings" -app "/SampleApplication" -prov "RsaProtectedConfigurationProvider"

    To later decrypt this configuration section:

        aspnet_regiis -pd "connectionStrings" -app "/SampleApplication"

    The following is a brief description of the command-line options used in the example above. aspnet_regiis supports many more options, which you can read about in the links provided for reference below.

        Option  Description
        -pe     Section name to encrypt
        -pd     Section name to decrypt
        -app    Web application name
        -prov   Encryption/decryption provider

    ASP.Net automatically decrypts the content of the web.config file at runtime, so no programming changes are needed. Another tool, aspnet_setreg.exe, is to be used if certain configuration file sections pertinent to the .Net runtime are to be encrypted. For more information on when and how to use aspnet_setreg, please refer to the references below. Hope this helps! Some great references concerning the topic: http://msdn.microsoft.com/en-us/library/ff650037.aspx http://msdn.microsoft.com/en-us/library/zhhddkxy.aspx http://msdn.microsoft.com/en-us/library/dtkwfdky.aspx http://msdn.microsoft.com/en-us/library/68ze1hb2.aspx

    Read the article

  • IT Optimization Plan Pays Off For UK Retailer

    - by [email protected]
    I caught this article in ComputerworldUK yesterday. The headline reports that UK-based supermarket chain Morrisons is increasing its IT spend... OK, sounds good. Even nicer that Oracle is a big part of that. But what caught my eye were three things: 1) Morrisons truly has a long-term strategy for IT; in this case, modernizing and optimizing how they use IT for business advantage. 2) Even in a tough economic climate, Morrisons views IT investments as contributing to and improving the bottom line. Specifically, "The investment in IT contributed to a 21 percent increase in Morrison's underlying profit..". 3) The phased, 3-year "Optimization Plan" took a holistic approach to their business, from CRM and Supply Chain systems to the underlying application infrastructure. On the infrastructure front, adopting a more flexible Service-Oriented Architecture enabled them to be more agile and adapt their business, and Identity Management helped with sometimes mundane (but costly) issues like lost passwords and being able to document who has access to what. Things don't always turn out so rosy. And I know it was a long and difficult process... but it's nice to see a happy ending every once in a while.

    Read the article

  • PHP file_put_contents File Locking

    - by hozza
    The scenario: you have a file with a string (an average sentence's worth) on each line. For argument's sake let's say this file is 1MB in size (thousands of lines). You have a script that reads the file, changes some of the strings within the document (not just appending but also removing and modifying some lines) and then overwrites all the data with the new data. The questions: Does 'the server' (PHP, the OS, httpd, etc.) already have systems in place to stop issues like this (reading/writing halfway through a write)? i. If it does, please explain how it works and give examples or links to relevant documentation. ii. If not, are there things I can enable or set up, such as locking a file until a write is completed and making all other reads and/or writes fail until the previous script has finished writing? My assumptions and other information: The server in question is running PHP and Apache or Lighttpd. If the script is called by one user and is halfway through writing to the file, and another user reads the file at that exact moment, the user who reads it will not get the full document, as it hasn't been written yet. (If this assumption is wrong please correct me.) I'm only concerned with PHP writing and reading a text file, and in particular the functions fopen/fwrite and mainly file_put_contents. I have looked at the file_put_contents documentation but have not found the level of detail or a good explanation of what the LOCK_EX flag is or does. The scenario is an EXAMPLE of a worst case where I would assume these issues are more likely to occur, due to the large size of the file and the way the data is edited. I want to learn more about these issues and don't want or need answers or comments such as "use mysql" or "why are you doing that", because I'm not doing that; I just want to learn about file reading/writing with PHP and don't seem to be looking in the right places/documentation, and yes, I understand PHP is not the perfect language for working with files in this way...
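
    For what it's worth, PHP's LOCK_EX maps to the advisory flock(2) lock: a writer holding the exclusive lock blocks other cooperating writers and LOCK_SH readers until it finishes, but it does nothing against readers that never take a lock. A minimal sketch of the mechanism, written in Python because it exposes flock(2) directly (Unix only; the file name is illustrative):

        import fcntl

        # Writer: the whole read-modify-write cycle happens under one exclusive lock.
        with open("data.txt", "r+") as f:
            fcntl.flock(f, fcntl.LOCK_EX)    # blocks until no other lock is held
            lines = [l for l in f if "obsolete" not in l]
            f.seek(0)
            f.truncate()
            f.writelines(lines)
            fcntl.flock(f, fcntl.LOCK_UN)    # release; waiting readers proceed

        # Reader: must also take a (shared) lock, or it can still
        # observe a half-written file.
        with open("data.txt") as f:
            fcntl.flock(f, fcntl.LOCK_SH)
            content = f.read()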

    Read the article

  • MAAS/JuJu Clarifications

    - by ectoskeleton
    I really love the concept of MAAS underlying an OpenStack implementation, but there are a few questions about MAAS that I am not entirely clear on. Should all hosts be set to network boot at all times, or after they have been registered and allocated as a service should they boot to disk? After juju bootstrap is executed, I turn on the machine that has been allocated (note WoL isn't working; I suspect it's being blocked on the network), the machine boots up and then juju status executes correctly, agent running and all that good stuff. If I 'reboot' the machine (testing power failure/problem, whatever), juju status comes back but the agent-state is no longer in the running state, and so far I have to destroy the environment and restart. In all cases I have never been able to deploy any services to any of the other nodes. I deploy the service with juju, note which node it was assigned, and then start the system. The system just boots up into a basic node. If I SSH to it I have to enter a password, so it's not setting up the ssh key or anything. This is on Ubuntu 12.04.1 LTS systems and HP GL360G7 hosts. The MAAS management server is running as a VM but all on the same network. At this point I am not sure if I am doing something wrong or if there is a problem somewhere else. Is the idea that anytime a host is rebooted it should be rebuilt from the ground up, or is something else going on behind the scenes to tell it to boot the local image? If the latter, why doesn't the agent start on a system that has been successfully set up before (a juju bootstrapped system)?

    Read the article

  • How can I run Ubuntu Desktop on my Galaxy Nexus?

    - by Jack Senechal
    On the Ubuntu for phones site it advertises the desktop view feature: "The phone becomes a full PC and thin client when docked." And there's the demo by Canonical of something similar running under Ubuntu for Android. I realize they're different systems, but the end effect of both is to have a full Ubuntu system running on the phone. I've installed Ubuntu Touch Preview on my Galaxy Nexus (toro), and it's working as expected (no cellular signal, but wifi works, etc.). But when I plug in a monitor via HDMI it just mirrors the phone's touch display. There's also currently no bluetooth support for attaching a keyboard and mouse. The keyboard only kind of works via USB, and the mouse not at all. I've also tried running Ubuntu under Android via VNC, but the lack of responsiveness of VNC makes it impractical for daily use. I'd consider that route again if there is some way to make the UI more responsive. So the question is, how can I set up my phone to run Ubuntu Desktop in a way that's usable as a laptop replacement? Is there a way to enable Desktop View on Ubuntu Touch? Or can I run Ubuntu for Android as in the previously referenced demo? Plugging into a monitor would be OK, but I'd love to be able to use the desktop interface with mouse and keyboard through the phone's screen as well. Touch input and an onscreen keyboard would be a plus but are definitely not necessary.

    Read the article

  • Nvidia 740M still not working after Bumblebee installation

    - by Jon
    First of all, I have checked a lot of similar topics, but I still can't get my laptop to use the Nvidia 740M. So first things first: I have an Asus X550V laptop (i5-3230, 4GB RAM, Nvidia 740M + Intel HD4000). I installed Ubuntu 13.10 alongside Win8 (preinstalled) and both systems are running without problems. However, I have a problem with the second graphics card (Nvidia 740M), as Ubuntu doesn't recognize it. I installed Bumblebee with this tutorial, https://wiki.ubuntu.com/Bumblebee#Installation, but I still get a "Cannot access secondary GPU" error when trying to run 'optirun Steam' in a terminal. Then I tried to follow the advice in the error output: [ERROR] Cannot access secondary GPU - error: [XORG] (EE) No devices detected. You need to edit /etc/bumblebee/xorg.conf.nvidia (or /etc/bumblebee/xorg.conf.nouveau if using the nouveau driver) and specify the correct BusID by following the instructions therein. But with lspci filtered for VGA I get only info about the Intel 4000 and no Nvidia. When I type plain lspci, I do get the line for the Nvidia 740M, but after I edit the config file I still get the secondary GPU error. Also, in /etc/bumblebee/xorg.conf.nvidia there wasn't a BusID or anything similar, so I just added the whole line in the Device section. As I said, I tried a lot of things to get it working, avoiding this forum (as I didn't want to bother people when solutions might already exist), but alas, I had to bother you. If there is a need for some additional info, just say, no problem at all. Thank you very much in advance. :)
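
    For reference, the kind of edit the error message asks for looks like the snippet below; the BusID value is an assumption and must match your own lspci output (note that lspci prints hexadecimal like "01:00.0" while xorg.conf expects decimal "PCI:1:0:0"):

        Section "Device"
            Identifier "DiscreteNvidia"
            Driver     "nvidia"
            # from an lspci line such as "01:00.0 3D controller: NVIDIA ..."
            BusID      "PCI:1:0:0"
        EndSection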

    Read the article

  • To encryption=on or encryption=off: a simple ZFS Crypto demo

    - by darrenm
    I've just been asked twice this week how I would demonstrate that ZFS encryption really is encrypting the data on disk. It needs to be really simple, and the target isn't forensics or cryptanalysis, just a quick demo to show the before and after. I usually do this small demo using a pool based on files so I can run strings(1) on the "disks" that make up the pool. The demo will work with real disks too, but it will take a lot longer (how much longer depends on the size of your disks). The file hamlet.txt is this one from gutenberg.org.

        # mkfile 64m /tmp/pool1_file
        # zpool create clear_pool /tmp/pool1_file
        # cp hamlet.txt /clear_pool
        # grep -i hamlet /clear_pool/hamlet.txt | wc -l

    Note the number of times hamlet appears.

        # zpool export clear_pool
        # strings /tmp/pool1_file | grep -i hamlet | wc -l

    Note the number of times hamlet appears on disk: it is 2 more, because the file is called hamlet.txt (file names are in the clear as well) and we keep at least two copies of metadata. Now let's encrypt the file systems in the pool. Note you MUST use a new pool file; don't reuse the one from above.

        # mkfile 64m /tmp/pool2_file
        # zpool create -O encryption=on enc_pool /tmp/pool2_file
        Enter passphrase for 'enc_pool':
        Enter again:
        # cp hamlet.txt /enc_pool
        # grep -i hamlet /enc_pool/hamlet.txt | wc -l

    Note the number of times hamlet appears is the same as before.

        # zpool export enc_pool
        # strings /tmp/pool2_file | grep -i hamlet | wc -l

    Note the word hamlet doesn't appear at all! As I said above, this isn't intended as "proof" that ZFS does encryption properly, just as a quick demo.

    Read the article

  • An adequate message authentication code for REST

    - by Andras Zoltan
    My REST service currently uses SCRAM authentication to issue tokens for callers and users. We have the ability to revoke caller privileges and ban IPs, as well as impose quotas on any type of request. One thing that I haven't implemented, however, is a MAC for requests. As I've thought about it more, for some requests I think this is needed, because otherwise tokens can be stolen, and before we identify this and deactivate the associated caller account, some damage could be done to our user accounts. In many systems the MAC is generated from the body or query string of the request; however this is difficult to implement, as I'm using the ASP.Net Web API and don't want to read the body twice. Equally importantly, I want to keep it simple for callers to access the service. So what I'm thinking is to have a MAC calculated over: the URL (possibly minus the query string); the verb; the request IP (potentially a barrier on some mobile devices, though); and the UTC date and time when the client issues the request. For the last one I would have the client send that string in a request header, of course, and I can use it to decide whether the request is 'fresh' enough. My thinking is that whilst this doesn't prevent message body tampering, it does prevent a malicious third party later using a model request as a template for different requests. I believe only the most aggressive man-in-the-middle attack would be able to subvert this, and I don't think our services offer any information or ability that is valuable enough to warrant that. The services will use SSL as well, for sensitive stuff. And if I do this, then I'll be using HMAC-SHA-256 and issuing private keys for HMAC appropriately. Does this sound enough? Have I missed anything? I don't think I'm a beginner when it comes to security, but when working on it I am always shrouded in doubt, so I appreciate having this community to call upon!
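
    As a sketch of the signing step only (not the poster's actual implementation), computing an HMAC-SHA-256 over the fields listed above takes a few lines; shown here with Python's standard library, and every field value below is hypothetical:

        import hashlib, hmac
        from datetime import datetime, timezone

        secret = b"per-caller private key issued out of band"   # hypothetical key

        # Canonical string: client and server must agree on the exact field order.
        timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
        canonical = "\n".join([
            "POST",                 # verb
            "/api/v1/widgets",      # URL, query string excluded
            "203.0.113.7",          # request IP
            timestamp,              # also sent as a request header for freshness checks
        ])

        mac = hmac.new(secret, canonical.encode("utf-8"), hashlib.sha256).hexdigest()
        # The server recomputes the canonical string from the request and compares with
        # hmac.compare_digest(mac, received_mac) to avoid timing side channels.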

    Read the article

  • Commercial product using a GPL OS

    - by pfried
    We are planning to create a commercial product. The product consists of some MCUs and a small computer (we are developing on a Raspberry Pi at the moment). The computer needs an operating system, as we would like to keep things like WLAN and booting as simple as possible. We are creating some software that runs on this computer (a node.js application). Most operating systems, like Arch Linux, are licensed under the GPL. The product we would sell contains the computer with the preinstalled OS and software. This system operates as a central access point to the MCU devices and is able to control them. We use others' software in our product; we do not modify their source code. The product (the computer part) consists of a computer, an OS and software we create. How does the use of a GPL OS affect our own code (licence)? Is there a possibility of avoiding the GPL for our own code, e.g. shipping the software separately? Are there any effects on other components of our product, e.g. the MCU part? The node.js application delivers a WebApp to the client, where it is executed. Are there any effects there (as we would like to sell parts of the code as an additional app on the app stores)? I know we make use of the work of the community and I respect this. The problem is: the software alone is kind of useless without the MCU devices. I do not expect legal advice.

    Read the article

  • Revolutionizing Digital Commerce

    - by bwalstra
    The confluence of the Internet, the pace of change in technology, and the demands of the value-conscious consumer are accelerating the evolution of the global digital marketplace at an unprecedented rate. Success in the new digital economy has become inextricably linked with the agility to launch innovative products, services, and new business models efficiently, with minimal risk. A major obstacle to agility, and by extension to success in digital commerce, is the fact that by and large information technology (IT) infrastructure is tightly coupled with particular business models. Enterprises, through well-intentioned but misconstrued cost-saving beliefs, continue to customize existing infrastructure and create new silos to support new business models. In reality, this approach results in rigid, inflexible business processes and exposes the enterprise to unnecessary risks, higher opportunity costs, and lower profit margins. Oracle, a leading supplier of business solutions to the enterprise, is enabling the business strategies necessary to succeed in the digital economy by offering a modern, open, modular, and functionally comprehensive revenue management solution that decouples IT infrastructure from business models. Enterprises using the Oracle solution are able to focus on core competencies and innovate unimpeded, assuring that business and IT systems will seamlessly adapt to the changing conditions of the digital economy. Revolutionizing Digital Commerce: An Oracle Revenue Management Solution

    Read the article

  • Ubuntu doesn't boot after adding a bootflag to the Windows partition

    - by Nils
    I have Ubuntu 10.10 installed on one (physical) hard drive and Windows on the other. On both drives grub is installed to boot both operating systems. When I wanted to install SP1 for Win 7, I had to add a bootable flag to the partition from which Windows boots, otherwise the installation of SP1 does not work. I did so by booting into Ubuntu and using gparted to add this flag. After doing so, the update to SP1 worked without any problems. When trying to boot back into Ubuntu, grub complained that it couldn't find the kernel anymore! I tried booting into an Ubuntu minimal CD and restoring grub using chroot, update-grub and grub-install, which didn't work. I still had the problem that it was unable to boot Ubuntu, putting me in a minimal system called initramfs. It seems, however, that the UUIDs of the partitions changed; I guess this happened when I added the boot flag to the Windows disk. Next I tried to tell grub not to use the UUID for loading the kernel, by uncommenting something in /etc/default/grub. Then I got the kernel booting, but it suddenly stops (I guess when it is trying to mount the root file system), saying that the UUID in question does not exist, putting me into initramfs again. The strange thing is that there I couldn't even manage to mount the root partition using /dev/sdb1 (on which it is in my case). I would be glad if there is a way to restore the system without having to reinstall it.
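
    For anyone hitting the same wall, a hedged sketch of the usual recovery route from a live/minimal CD, assuming /dev/sdb1 really is the Ubuntu root partition (device names are the asker's; adjust to your layout):

        $ sudo blkid                          # list the partitions' *current* UUIDs
        $ sudo mount /dev/sdb1 /mnt
        $ sudo mount --bind /dev /mnt/dev     # let the chroot see devices
        $ sudo mount --bind /proc /mnt/proc
        $ sudo mount --bind /sys /mnt/sys
        $ sudo chroot /mnt
        # nano /etc/fstab                     # swap the stale UUID for the one blkid printed
        # update-grub                         # regenerate grub.cfg with the new UUIDs
        # grub-install /dev/sdb               # reinstall grub to the Ubuntu disk's MBR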

    Read the article

  • Be There: Tinkerforge/NetBeans Platform Integration Course

    - by Geertjan
    Tinkerforge is an electronic construction kit. It exposes a number of API bindings, including, of course, Java. The nice thing also is that Tinkerforge products are open source, on both the hardware and software levels, so you can take their bases as a starting point for your own modifications. "The TinkerForge system is a set of pre-built electronics boards that are built in such a way that you can stack the boards (known as bricks), attach accessories (known as bricklets), and have your prototype up and running quickly. Unlike systems such as the Arduino or Launchpad, the TinkerForge has to be attached to a computer and the computer does all of the work. With an easy set of application programming interfaces (APIs) available in C/C++, C#, Java, PHP, and Ruby, the system is easy to interface and program over USB in a snap." (from this useful article) Henning Krüp, who has arranged several NetBeans Platform Certified Training Courses in the past, in the Nordhorn/Lingen area in Germany, had the inspired idea to focus the next course on integration with Tinkerforge. In other words, the whole course will be focused on creating a standalone Java desktop application that leverages the NetBeans Platform to interact with Tinkerforge! Interested in joining the course or setting up something similar yourself? The course organized by Henning will be held from 19 to 21 September, as explained here, together with contact details. If you'd like to organize a similar course at a location of your choosing, leave a comment at the end of this blog entry and we'll set something up together!

    Read the article

  • PASS Business Intelligence Virtual Chapter Upcoming Sessions (November 2013)

    - by Sergio Govoni
    Let me point out the upcoming live events, dedicated to Business Intelligence with SQL Server, that the PASS Business Intelligence Virtual Chapter has scheduled for November 2013.

    The "Accidental Business Intelligence Project Manager"
    Date: Thursday 7th November - 8:00 PM GMT / 3:00 PM EST / Noon PST
    Speaker: Jen Stirrup
    URL: https://attendee.gotowebinar.com/register/5018337449405969666
    You've watched The Apprentice with Donald Trump and Lord Alan Sugar. You know that the Project Manager is usually the one who gets fired. You've heard that Business Intelligence projects are prone to failure. You know that a quick Bing search for "why do Business Intelligence projects fail?" produces a search result of 25 million hits! Despite all this… you're now a Business Intelligence Project Manager – now what do you do? In this session, Jen will provide a "sparks from the anvil" series of steps and working practices in Business Intelligence project management. What about waterfall vs agile? What is a Gantt chart anyway? Is Microsoft Project your friend or a problematic aspect of being a BI PM? Jen will give you some ideas and insights that will help you set your BI project right: assess priorities, avoid conflict, empower the BI team and generally deliver the Business Intelligence project successfully!

    Dimensional Modelling Design Patterns: Beyond Basics
    Date: Tuesday 12th November - Noon AEDT / 1:00 AM GMT / Monday 11th November 5:00 PM PST
    Speakers: Jason Horner, Josh Fennessy and friends
    URL: https://attendee.gotowebinar.com/register/852881628115426561
    This session will provide a deeper dive into the art of dimensional modeling. We will look at the different types of fact tables and dimension tables, and how and when to use them. We will also cover some approaches to creating rich hierarchies that make reporting a snap. This session promises to be very interactive and engaging; bring your toughest dimensional modeling quandaries.

    Data Vault Data Warehouse Architecture
    Date: Tuesday 19th November - 4:00 PM PST / 7:00 PM EST / Wednesday 20th November 11:00 PM AEDT
    Speakers: Jeff Renz and Leslie Weed
    URL: https://attendee.gotowebinar.com/register/1571569707028142849
    Data vault is a compelling architecture for an enterprise data warehouse using SQL Server 2012. A well-designed data vault data warehouse facilitates fast, efficient and maintainable data integration across business systems. In this session Leslie and I will review the basics of enterprise data warehouse design, introduce you to the data vault architecture and discuss how you can leverage new features of SQL Server 2012 to help make your data warehouse solution provide maximum value to your users.

    Read the article

  • In an Entity/Component system, can component data be implemented as a simple array of key-value pairs? [on hold]

    - by 010110110101
    I'm trying to wrap my head around how to organize components in an Entity Component System once everything in the current scene/level is loaded in memory. (I'm a hobbyist, BTW.) Some people seem to implement the Entity as an object that contains a list of Component objects, where components contain data organized as an array of key-value pairs and the value is serialized "somehow". (Pseudocode is loosely C# for brevity.)

        class Entity {
            Guid _id;
            List<Component> _components;
        }

        class Component {
            List<ComponentAttributeValue> _attributes;
        }

        class ComponentAttributeValue {
            string AttributeName;
            object AttributeValue;
        }

    Others describe Components as an in-memory "table": an entity acquires the component by having its key placed in the table, and the attributes of the component-entity instance are like the columns of the table.

        class Renderable_Component {
            List<RenderableComponentAttributeValue> _entities;
        }

        class RenderableComponentAttributeValue {
            Guid entityId;
            matrix4 transformation;
            // other stuff for rendering
            // everything is strongly typed
        }

    Others describe this literally as a table (and such tables sound like an EAV database schema, BTW, with the value serialized "somehow"):

        Render_Component_Table
        ----------------------
        Entity Id | Attribute Name | Attribute Value

    and when brought into running code:

        class Entity {
            Guid _id;
            Dictionary<string, object> _attributes;
        }

    My specific question is: given various components (Renderable, Positionable, Explodeable, Hideable, etc.), and given that each component has an attribute with a particular name (TRANSLATION_MATRIX, PARTICLE_EMISSION_VELOCITY, CAN_HIDE, FAVORITE_COLOR, etc.), should: an entity contain a list of components, where each component in turn has its own array of named attributes with values serialized somehow; or should components exist as in-memory tables of entity references, where each "row" has "columns" representing the attributes, with values specific to each entity instance and strongly typed; or should all attributes be stored in an entity as a single array of named attributes with values serialized somehow (which could have name collisions); or something else???

    Read the article

  • Studentworker - being a superhero?

    - by Niklas H
    A couple of weeks ago I got a new job as a student worker for a web agency. The job is 15-20 hours a week. Even though I am new in the company, I feel right at home and I enjoy working with my co-workers. To start off with, I was assigned to work on an internal tool for the company, in order to learn their systems and their development platform. The deadline for this project is this week, and I am right on time. But today (Wednesday at noon) I received an email from my boss, asking me to do a new project that has a deadline on Friday morning. The new assignment alone will be hard to finish on time, and on top of that I need to finish the other assignment on time. My question is: how do I handle my boss expecting me to be a superhero? EDIT: I will talk to my boss about delaying one of the projects. But another problem is that the new assignment will be hard to do on time (Friday morning). I didn't have a say on the deadline; I just got a mail telling me what it was. I am new in the company and want to stay, but I don't want to start off on the wrong foot with the boss.

    Read the article

  • Data migration - dangerous or essential?

    - by MRalwasser
    The software development department of my company is facing the problem that data migrations are considered potentially dangerous, especially by my managers. The background is that our customers are using a large amount of data with poor quality. The reasons for this are only partially related to our software quality; they lie rather in the history of the data: most of it has been migrated from predecessor systems, some bugs caused (mostly business) inconsistencies in the data records, and there were accidental misentries on the customer's side (which our software allowed by error). The most important counter-arguments from my managers are that faulty data may turn into even worse data, that the data troubles may wake up some managers at the customer, and that some processes on the customer's side may not work anymore because they have somewhat adapted to our system. Personally, I consider data migrations an integral part of software development: data migration is to data what refactoring is to code. I think that data migration is essential for creating software that evolves. Without it, we would have to create painful software that somehow works around a bad data structure. I am asking you: what are your thoughts on data migration, especially for real-life cases and not only from a developer's perspective? Do you have any arguments against my managers' opinions? How does your company deal with data migrations and the difficulties caused by them? Any other interesting thoughts which belong to this topic?

    Read the article

  • Groundbreaking and ready to use: Oracle 12c In-Memory option launch in Frankfurt

    - by Anne Manke
    Ever since the Oracle 12c In-Memory database option was announced at OpenWorld in San Francisco last year, the DB community has been eager to see what this groundbreaking technology will deliver for ad-hoc real-time analytics on live transactions, data warehousing, reporting and online transaction processing (OLTP). The bar is set high, because with the new 12c In-Memory option Larry Ellison promises: 100x faster query processing for real-time analytics on OLTP processes or data warehouses; a doubling of transaction throughput; 100% compatibility with existing applications; data stored in both row format and (in-memory) column format, while remaining active and consistent; cloud-readiness without data migration; and an extension of in-memory query processing across multiple servers, to name just a few of the features (more information here!). With the new 12c In-Memory database option, queries are processed faster than the question can be asked, according to Larry Ellison. On 17 June 2014 the 12c In-Memory option will be presented at an exclusive launch event in Frankfurt am Main. On the agenda are talks, discussions and a live demo of the In-Memory database option. Register now! Place & time: 17 June 2014, 9:30-15:15, Radisson Blu Hotel (Franklinstrasse 65, 60486 Frankfurt am Main). Agenda: 9:30 Registration; 10:00 Welcome, Guenther Stuerner, Vice President Sales Consulting, Oracle Deutschland (in German); 10:15 Analyst talk, Carl W. Olofson, Research Vice President, IDC (in English); 10:35 Keynote, Andy Mendelsohn, Head of Database Development, Oracle (in English); 11:35 Panel discussion (in English) with Jens-Christian Pokolm, Postbank Systems AG; Andy Mendelsohn, Head of Database Development, Oracle; Carl W. Olofson, Research Vice President, IDC; and Dr. Dietmar Neugebauer, Chairman of the Board, DOAG; 12:30 Lunch; 13:45 Oracle Database In-Memory option: Perform – Manage – Live demo, Ralf Durben, Senior Principal Systems Consultant, Oracle Deutschland (in German); 14:30 In-Memory – revolution for your DWH – real-time data warehouse – mixed workloads – live demo – live data query, Alfred Schlaucher, Senior Principal Systems Consultant, Oracle Deutschland (in German); 15:15 Closing remarks & networking

    Read the article

  • Fixed-Function vs Shaders: Which for a beginner?

    - by Rob Hays
    I'm currently going to college for computer science. Although I do plan on utilizing an existing engine at some point to create a small game, my aim right now is towards learning the fundamentals: namely, 3D programming. I've already done some research regarding the choice between DirectX and OpenGL, and the general sentiment that came out of that was that whether you choose OpenGL or DirectX as your training-wheels platform, a lot of the knowledge is transferable to the other platform. Therefore, since OpenGL is supported by more systems (probably a silly reason to choose what to learn), I decided that I'm going to learn OpenGL first. After I made this decision to learn OpenGL, I did some more research and found out about a dichotomy that I was somehow unaware of all this time: fixed-function OpenGL vs. modern programmable shader-based OpenGL. At first, I thought it was an obvious choice that I should choose to learn shader-based OpenGL, since that's what's most commonly used in the industry today. However, I then stumbled upon the very popular Learning Modern 3D Graphics Programming by Jason L. McKesson, located here: http://www.arcsynthesis.org/gltut/ I read through the introductory bits, and in the "About This Book" section, the author states: "First, much of what is learned with this approach must be inevitably abandoned when the user encounters a graphics problem that must be solved with programmability. Programmability wipes out almost all of the fixed function pipeline, so the knowledge does not easily transfer." yet at the same time also makes the case that fixed functionality provides an easier, more immediate learning curve for beginners by stating: "It is generally considered easiest to teach neophyte graphics programmers using the fixed function pipeline." Naturally, you can see why I might be conflicted about which paradigm to learn: do I spend a lot of time learning (and then later unlearning) the ways of fixed functionality, or do I choose to start out with shaders? My primary concern is that modern programmable shaders somehow require the programmer to already understand the fixed-function pipeline, but I doubt that's the case. TL;DR: As an aspiring game graphics programmer, is it in my best interest to learn 3D programming through fixed functionality or modern shader-based programming?

    Read the article

  • Get to Know a Candidate (6 of 25): Jill Stein – Green Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about "Romney" or "Obama". This is not a post about whom I am voting for. Information sourced from Wikipedia. Stein is a physician with degrees from Harvard College and Harvard Medical School. She serves on the boards of Greater Boston Physicians for Social Responsibility and MassVoters for Fair Elections, and has been active with the Massachusetts Coalition for Healthy Communities. Jill Stein advocates a "Green New Deal" in which renewable energy jobs would be created to address climate change and environmental issues, with the objective of employing "every American willing and able to work". Citing the research of Dr. Phillip Harvey, Professor of Law & Economics at Rutgers University, as evidence of the successful economic effects of the 1930s' New Deal projects, Stein would fund the plan with a 30% reduction in the U.S. military budget, returning US troops home, and increasing taxes on areas such as capital gains, offshore tax havens and multimillion-dollar real estate. Stein plans on impacting what she sees as a growing convergence of environmental crises in water, soil, fisheries and forests through the creation of sustainable infrastructure, based on clean renewable energy generation and sustainable-communities principles such as increasing intra-city mass transit and inter-city railroads, creating 'complete streets' that safely encourage bike and pedestrian traffic, and regional food systems based on sustainable organic agriculture. The Green Party of the United States was founded in 1991 as a voluntary association of state green parties. With its founding, the Green Party of the United States became the primary national Green organization in the United States, eclipsing the Greens/Green Party USA, which emphasized non-electoral movement building. The Green Party of the United States emphasizes environmentalism, non-hierarchical participatory democracy, social justice, respect for diversity, peace and nonviolence. Their "Ten Key Values," which are described as non-authoritative guiding principles, are as follows: grassroots democracy; social justice and equal opportunity; ecological wisdom; nonviolence; decentralization; community-based economics; feminism and gender equality; respect for diversity; personal and global responsibility; future focus and sustainability. The Green Party does not accept donations from corporations. Thus, the party's platforms and rhetoric critique any corporate influence and control over government, media, and American society at large. Stein has access to 403 electoral votes and is a write-in candidate in GA, IN, and MS. Learn more about Jill Stein and the Green Party on Wikipedia.

    Read the article

  • Where do I start in regards to making a Gnome/Unity Form Application

    - by JMK
    OK, so I am familiar with developing form and console applications on Windows using Visual Studio .Net with C#, but where do I start when it comes to Linux distros like Ubuntu? Is there an equivalent? How would one go about matching what they can do in a Windows environment with .Net and C# in a Linux environment without .Net, coding in something like Java or C/C++? I am aware of Eclipse; does Eclipse have a form designer, or do you have to code the design of any Gnome/Unity forms manually? Can I use Eclipse to write the Linux equivalent of a console application that you just double-click on to run? I also know about Mono, but the idea is that I want to learn how to develop software without using anything in the Microsoft stack, and I am not sure where to start. What is the standard language/framework used to develop these types of applications on Linux? As I become more proficient with Visual Studio, C# and .Net, it has struck me that without these Microsoft tools, I am nothing. I am only capable of developing for the Microsoft OS and this scares me. This isn't some anti-Microsoft thing; Microsoft makes some incredible software/hardware/operating systems/IDEs, but it is generally a bad idea to put all of your eggs in one basket, so if I want to learn how to develop terminal and Gnome/Unity form applications, where in the world do I start? I have used Linux on and off for years, but Windows has been my primary OS. However, I have watched Linux get better and better, and as much as I love Windows 7, I am dubious about Windows 8 (I for one will sorely miss my start menu)! Obviously MS aren't going anywhere anytime soon, and I could spend the next couple of decades developing for .Net without any issues, but just because you can get away with something doesn't always mean it's a good idea. Thanks

    Read the article
