Search Results

Search found 10538 results on 422 pages for 'push technology'.


  • GP11.1

    - by user13334066
    It's the Assen round of the 2011 MotoGP season, and Ducati have launched their GP11.1. The Ducati's front-end woes were highlighted quite thoroughly throughout the 2010 season, with both Casey and Nicky regularly visiting the gravel traps. Now the question is: was it really a front-end issue? What's most probable is that the GP10 never had a front-end issue; it was the rear that was out. So what did Stoner's team do? They came up with setup changes that sorted out the rear end while transferring the problem to the front. And Casey has this brilliant ability to push beyond the limits of a vague and erratic front end... so naturally the real problem lay hidden. Like Kevin Cameron said: in human nature, our strengths are our weaknesses. Casey's pure speed came at the cost of fine machinery feel, which ultimately took the Ducati in the wrong development direction.

    Read the article

  • Ajax does not send the data to my PHP file [migrated]

    - by Mert METIN
    I am trying to send my data to a PHP file, but it does not work. This is my Ajax code: var artistIds = new Array(); $(".p16 input:checked").each(function(){ artistIds.push($(this).attr('id')); }); $.post('/json/crewonly/deleteDataAjax2', { artistIds: artistIds }, function(response){ if (response == 'ok') alert('dolu'); else if (response == 'error') alert('bos'); }); and this is my PHP: public function deleteDataAjax2() { extract($_POST); if (isset($artistIds)) $this->sendJSONResponse('ok'); else $this->sendJSONResponse('error'); } However, my artistIds on the PHP side is null. Why?
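    A minimal client-side sketch of the same request is shown below, with the `elseif` syntax error corrected and a guard for an empty selection; the endpoint URL and the 'ok'/'error' responses are taken from the question, while the empty-array check is an added assumption.

```javascript
// Sketch only: collect checked ids and POST them; assumes the same
// /json/crewonly/deleteDataAjax2 endpoint used in the question.
var artistIds = [];
$(".p16 input:checked").each(function () {
    artistIds.push($(this).attr('id'));   // one id per checked box
});

if (artistIds.length === 0) {
    // An empty JavaScript array produces no POST field at all,
    // so $artistIds would never be set on the PHP side.
    alert('nothing selected');
} else {
    $.post('/json/crewonly/deleteDataAjax2', { artistIds: artistIds }, function (response) {
        if (response == 'ok') {
            alert('dolu');
        } else if (response == 'error') { // "else if", not "elseif", in JavaScript
            alert('bos');
        }
    });
}
```

    For what it's worth, jQuery serializes the array as artistIds[], which PHP normally exposes as $_POST['artistIds']; dumping the raw $_POST contents on the server is a reasonable first debugging step.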

    Read the article

  • Continuous integration - build Debug and Release every time?

    - by Darian Miller
    Is it standard practice when setting up a Continuous Integration server to build both a Debug and a Release version of each project? Most of the time developers code with a Debug project configuration enabled, and there could be different library path configurations, compiler defines, or other items configured differently between Debug and Release that would cause them to act differently. I configured my CI server to build both Debug & Release of each project, and I'm wondering if I'm just overthinking it. My assumption is that I'll do this as long as I can get quick feedback, and once that stops happening, push the Release build off to a nightly build perhaps. Is there a 'standard' way of approaching this?

    Read the article

  • Great Discussion of ETL and ELT Tooling in TDWI Linkedin Group

    - by antonio romero
    All, there's a great discussion of ETL and ELT tooling going on in the official TDWI LinkedIn group, under the heading "How Sustainable is SQL for ETL?" It delves into a wide range of topics: the pros and cons of hand-coding vs. using tools to design ETL; ETL (with separate transformation engines) vs. ELT (transforms in the database) and push-down solutions; and the future of ETL and data warehousing products. A number of community members (of varying affiliations) have kept this conversation going for many months, and are learning from each other as they go. So check it out… Also, while you're on LinkedIn, join the Oracle ETL/Data Integration LinkedIn group (for both OWB and ODI users), which recently passed the 2,000-member mark.

    Read the article

  • What are the tangible advantages of proper unit tests over functional tests called unit tests?

    - by Jackie
    A project I am working on has a bunch of legacy tests that were not properly mocked out. Because of this, the only dependency it has is EasyMock, which doesn't support statics, constructors with arguments, etc. The tests instead rely on database connections and such to "run" the tests. Adding PowerMock to handle these cases is being shot down as cost-prohibitive due to the need to upgrade the existing project to support it (another discussion). My questions are: what are the real-world, tangible benefits of proper unit testing that I can use to push back? Are there any? Am I just being a stickler by saying that bad unit tests (even if they work) are bad? Is code coverage just as effective?
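    As an illustration of the distinction being drawn (not taken from the question, which is about Java and EasyMock), here is a small JavaScript sketch of a properly isolated unit test: the repository is injected, so the test needs no database connection and a failure points at the pricing rule alone. All names are invented for the example.

```javascript
// Hypothetical service whose business rule we want to test in isolation.
function OrderService(repository) {
    this.repository = repository;
}
OrderService.prototype.totalFor = function (customerId) {
    // Rule under test: sum line prices, apply a 10% discount over 100.
    var total = this.repository.findOrders(customerId)
        .reduce(function (sum, order) { return sum + order.price; }, 0);
    return total > 100 ? total * 0.9 : total;
};

// "Proper" unit test: a hand-rolled fake stands in for the database-backed repository.
function testDiscountAppliedOverThreshold() {
    var fakeRepo = {
        findOrders: function () { return [{ price: 60 }, { price: 60 }]; }
    };
    var service = new OrderService(fakeRepo);
    var total = service.totalFor('any-customer');
    if (total !== 108) {
        throw new Error('expected 108, got ' + total);
    }
}

testDiscountAppliedOverThreshold();
console.log('discount rule verified without touching a database');
```

    The tangible benefits usually cited are exactly what the fake buys here: the test runs in milliseconds, needs no environment setup, and fails only when the logic it names is broken.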

    Read the article

  • Bitbucket and a small development house

    - by Marlon
    I am in the process of finally rolling out Mercurial as our version control system at work. This is a huge deal for everyone as, shockingly, they have never used a VCS. After months of putting the bug in management's ears, they finally saw the light and now realise how much better it is than working with a network of shared folders! In the process of rolling this out, I am thinking about different strategies to manage our stuff, and I am leaning towards using Bitbucket as our "central" repository. The projects in Bitbucket will be solely private projects, and everyone will push and pull from there. I am open to different suggestions, but has anyone got a similar setup? If so, what caveats have you encountered?

    Read the article

  • 3 Performance Presentations from SAE added to the portal

    - by uwes
    The following three presentations have been added to the eSTEP portal:
    - Oracle's Systems Performance Oct 2012 Update
    - Oracle Leads the Way on Realistic Sizing
    - Oracle's Performance: Oracle SPARC SuperCluster
    All presentations were created by Brad Carlile, Sr. Director Strategic Applications Engineering, SAE. How to get to the presentations:
    - URL: http://launch.oracle.com/
    - Email Address: <provide your email address>
    - Access URL/Page Token: eSTEP_2011
    To get access, push the Agree button on the left side of the page. Then click on eSTEP Download (tab band at the top) ---> presentations at the right-hand side, or click on Miscellaneous (menu on the left-hand side) ---> presentations at the right-hand side.

    Read the article

  • How to make rigid bodies collide with Apex Clothing in PhysX for Maya

    - by b1nary.atr0phy
    According to the [Apex] Clothing Overview section of the documentation: Colliding with Rigid Bodies Rigid bodies present in your scene will push clothing around roughly as you might expect. Well, I beg to differ. The Apex Cloth collides with the floor just fine, but that's about the only thing it collides with (unless I add ragdoll to the same skeleton that the cloth is attached to.) So for example, if I try to bounce a ball (dynamic rigid body) into the cloth, it simply bounces through it. If I try to walk an actor with ragdoll through it, he simply clips through it as well. Anyone have any insight on this?

    Read the article

  • Abstract Data Type and Data Structure

    - by mark075
    It's quite difficult for me to understand these terms. I searched on Google and read a little on Wikipedia, but I'm still not sure. I've determined so far that: an Abstract Data Type is a definition of a new type that describes its properties and operations; a Data Structure is an implementation of an ADT; and many ADTs can be implemented as the same Data Structure. If I'm thinking about this right, an array as an ADT means a collection of elements, and as a Data Structure it means how that collection is stored in memory. A stack is an ADT with push and pop operations, but can we speak of a stack data structure if what I mean is that I used a stack implemented as an array in my algorithm? And why isn't a heap an ADT? It can be implemented as a tree or an array.
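    To make the distinction concrete, here is a small JavaScript sketch (not from the question): the ADT is the agreed contract of operations, while the array-backed object is one possible data structure implementing it.

```javascript
// The "ADT" is just the contract: push(x) adds an element, pop() removes and
// returns the most recent one, peek() reads it, isEmpty() reports emptiness.

// One concrete data structure implementing that contract: a plain array.
function ArrayStack() {
    this.items = [];                      // storage detail, invisible to callers
}
ArrayStack.prototype.push = function (x) { this.items.push(x); };
ArrayStack.prototype.pop = function ()  { return this.items.pop(); };
ArrayStack.prototype.peek = function () { return this.items[this.items.length - 1]; };
ArrayStack.prototype.isEmpty = function () { return this.items.length === 0; };

// A linked-list version could replace ArrayStack without changing this caller,
// which is exactly what "same ADT, different data structure" means.
var s = new ArrayStack();
s.push(1);
s.push(2);
console.log(s.pop());     // 2
console.log(s.peek());    // 1
console.log(s.isEmpty()); // false
```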

    Read the article

  • Architecture guidelines for a "single page web-app"

    - by Matt Roberts
    I'm going to start a side project to build a "single page" web application. The application needs to be real-time, sending updates to the clients as changes happen. Are there any good resources for best-practice approaches to the architecture of these kinds of applications? The best resource I've found so far is the Trello architecture article here: http://blog.fogcreek.com/the-trello-tech-stack/ To me, this architecture, although very sexy, is probably over-engineered for my specific needs, although I do have similar requirements. I'm wondering if I need to bother with pub/sub on the server side; could I not just push updates from the server when something happens (e.g. when the client sends an update to the server, write the update to the db and then send an update to the clients)? Tech-wise, I'm probably looking to build this out in Node.JS or maybe Ruby, although the architecture guidelines should to some extent apply to any underlying server technology.
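    A minimal sketch of the simpler approach described above (persist each accepted update, then push it to every connected client, no separate pub/sub layer) could look like this in Node.js. It assumes the third-party ws package; the saveUpdate function and the message shape are hypothetical.

```javascript
// Sketch: broadcast each accepted update to every connected client.
const { WebSocketServer, WebSocket } = require('ws');

const wss = new WebSocketServer({ port: 8080 });

function saveUpdate(update) {
    // Hypothetical placeholder for the real database write.
    return Promise.resolve(update);
}

wss.on('connection', (socket) => {
    socket.on('message', async (raw) => {
        const update = JSON.parse(raw);
        await saveUpdate(update);                       // 1. persist the change
        const message = JSON.stringify({ type: 'update', payload: update });
        for (const client of wss.clients) {             // 2. push it to everyone
            if (client.readyState === WebSocket.OPEN) {
                client.send(message);
            }
        }
    });
});
```

    A broker only becomes necessary once updates have to fan out across several server processes; within a single process, looping over the open connections as above is usually enough.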

    Read the article

  • How/where to run the algorithm on a large dataset?

    - by niko
    I would like to run the PageRank algorithm on a graph with 4,000,000 nodes and around 45,000,000 edges. Currently I use the neo4j graph database and a classic relational database (Postgres), and for software projects I mostly use C# and Java. Does anyone know what would be the best way to perform a PageRank computation on such a graph? Is there any way to modify the PageRank algorithm so it can run on a home computer or a server (48 GB RAM), or is there a useful cloud service to push the data and the algorithm to and retrieve the results? At this stage the project is at the research stage, so if a cloud service is used I would prefer a provider that doesn't require much administration and service setup, and instead lets me focus on running the algorithm once and getting the results without much administrative overhead.
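    For a sense of scale, an edge list of that size stored as flat typed arrays can plausibly fit in 48 GB of RAM. The sketch below (plain JavaScript, names invented for illustration) shows the standard power-iteration form of PageRank over an in-memory edge list; dangling-node mass is ignored for brevity.

```javascript
// Sketch of in-memory PageRank by power iteration over an edge list.
// edges[i] = [source, target]; node ids are 0..n-1.
function pageRank(n, edges, damping = 0.85, iterations = 30) {
    const outDegree = new Float64Array(n);
    for (const [src] of edges) outDegree[src]++;

    let rank = new Float64Array(n).fill(1 / n);
    for (let it = 0; it < iterations; it++) {
        const next = new Float64Array(n).fill((1 - damping) / n);
        for (const [src, dst] of edges) {
            if (outDegree[src] > 0) {
                next[dst] += damping * rank[src] / outDegree[src];
            }
        }
        rank = next; // note: dangling nodes simply leak mass in this simplified sketch
    }
    return rank;
}

// Tiny example: 0 -> 1, 1 -> 2, 2 -> 0 (a cycle gives equal ranks).
console.log(pageRank(3, [[0, 1], [1, 2], [2, 0]]));
```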

    Read the article

  • How can a student programmer improve his teamwork skill?

    - by xiao
    I am a student right now. Recently I have been working on a project as the leader, with three other students. Due to our lack of experience, the project is progressing slowly and the members are frustrated. They do not feel a sense of accomplishment in the project. I am pressured and frustrated, too. As the team leader, I think I need to push them, but I do not know how. Should I help them solve coding problems, or just offer encouragement? And if I pay too much attention to that, it will slow down my own progress. This is not a technical question, but it is very common in software development. I hope veteran programmers can give me some suggestions. Thanks!

    Read the article

  • Clusterware 11gR2 – Setting up an Active/Passive failover configuration

    - by Gilles Haro
    Oracle provides many interesting ways to ensure High Availability. Data Guard configurations, RAC configurations, or even both (as recommended for a Maximum Availability Architecture - MAA) are the most frequently found. But when it comes to protecting a system with an Active/Passive architecture with failover capabilities, one often thinks of expensive third-party cluster systems. Oracle Clusterware technology, which comes free with Oracle Database, is, in the minds of most people, linked to Oracle RAC and is therefore rarely used to implement failover solutions. 11gR2 Clusterware, which is part of Oracle Grid Infrastructure, provides a comprehensive framework to set up automatic failover configurations. It is actually possible to make "failover-able", and therefore to protect, almost every kind of application (from xclock to the more complex Application Server). In the next couple of lines, I will try to present the different steps to achieve this goal: have a fully operational 11gR2 database protected by automatic failover capabilities. I assume you are fluent in installing Oracle Database 11gR2 and Oracle Grid Infrastructure 11gR2 on a Linux system, and that ASM is not a problem for you (as I am using it as shared storage). If not, please have a look at the Oracle Documentation. As often, I made my tests using an Oracle VirtualBox environment. The scripts are tested and functional; unfortunately, there can always be a typo or a mistake. This blog entry is not a course on the Clusterware Framework. I just hope it will let you see how powerful it is and that it will give you the will to go further with it…

    Prerequisites
    - 2 Linux boxes (OELCluster01 and OELCluster02) at the same OS level. I used OEL 5 Update 5 with Enterprise Kernel.
    - Shared Storage (SAN). On my VirtualBox system, I used Openfiler to simulate the SAN.
    - Oracle 11gR2 Database (11.2.0.1)
    - Oracle 11gR2 Grid Infrastructure (11.2.0.1)

    Step 1 – Install the software
    - Using asmlib, create 3 ASM disks (ASM_CRS, ASM_DTA and ASM_FRA).
    - Install Grid Infrastructure for a cluster (OELCluster01 and OELCluster02 are the 2 nodes of the cluster). Use ASM_CRS to store the Voting Disk and OCR. Use SCAN.
    - Install Oracle Database Standalone binaries on both nodes.
    - Use asmca to check/mount the disk groups on the 2 nodes.
    - Use dbca to create and configure a database on the primary node. Let's name it DB11G.
    - Copy the pfile and password file to the second node. Create the adump directory on the second node.

    Step 2 – Set up the resource to be protected
    After its creation with dbca, the database is automatically protected by the Oracle Restart technology available with Grid Infrastructure. Consequently, it restarts automatically (if possible) after a crash (e.g. kill -9 smon). A database resource has been created for that in the Cluster Registry. We can observe this with the command crsctl status resource, which shows an ora.db11g.db entry. Let's save the definition of this resource for future use:

mkdir -p /crs/11.2.0/HA_scripts
chown oracle:oinstall /crs/11.2.0/HA_scripts
crsctl status resource ora.db11g.db -p > /crs/11.2.0/HA_scripts/myResource.txt

    Although very interesting, Oracle Restart is not cluster aware and cannot restart the database on any other node of the cluster. So let's remove it from the OCR definitions, we don't need it!

srvctl stop database -d DB11G
srvctl remove database -d DB11G

    Instead of it, we need to create a new resource of a more general type: cluster_resource.
    Here are the steps to achieve this.

    Create an action script: /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh

#!/bin/bash
export ORACLE_HOME=/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=DB11G
case $1 in
'start')
  $ORACLE_HOME/bin/sqlplus /nolog <<EOF
  connect / as sysdba
  startup
EOF
  RET=0
  ;;
'stop')
  $ORACLE_HOME/bin/sqlplus /nolog <<EOF
  connect / as sysdba
  shutdown immediate
EOF
  RET=0
  ;;
'check')
  ok=`ps -ef | grep smon | grep $ORACLE_SID | wc -l`
  if [ $ok = 0 ]; then
    RET=1
  else
    RET=0
  fi
  ;;
'*')
  RET=0
  ;;
esac
if [ $RET -eq 0 ]; then
  exit 0
else
  exit 1
fi

    This script must provide, at least, methods to start, stop and check the database. It is self-explanatory and contains nothing special. Just be aware that it is run as the oracle user (because of the ACL property - see later) and needs to know about the environment. It also needs to be present on every node of the cluster:

chmod +x /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh
scp /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh oracle@OELCluster02:/crs/11.2.0/HA_scripts

    Create a new resource file, based on the information we got from the previous myResource.txt, and name it myNewResource.txt. myResource.txt is shown below. As we can see, it defines an ora.database.type resource named ora.db11g.db. A lot of properties are related to this type of resource and do not need to be used for a cluster_resource.

NAME=ora.db11g.db
TYPE=ora.database.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTIVE_PLACEMENT=1
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=1
CHECK_TIMEOUT=600
CLUSTER_DATABASE=false
DB_UNIQUE_NAME=DB11G
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%)
DEGREE=1
DESCRIPTION=Oracle Database resource
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=60
FAILURE_THRESHOLD=1
GEN_AUDIT_FILE_DEST=/oracle/admin/DB11G/adump
GEN_USR_ORA_INST_NAME=
GEN_USR_ORA_INST_NAME@SERVERNAME(oelcluster01)=DB11G
HOSTING_MEMBERS=
INSTANCE_FAILOVER=0
LOAD=1
LOGGING_LEVEL=1
MANAGEMENT_POLICY=AUTOMATIC
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
ORACLE_HOME=/oracle/product/11.2.0/dbhome_1
PLACEMENT=restricted
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=2
ROLE=PRIMARY
SCRIPT_TIMEOUT=60
SERVER_POOLS=ora.DB11G
SPFILE=+DTA/DB11G/spfileDB11G.ora
START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg)
START_TIMEOUT=600
STATE_CHANGE_TEMPLATE=
STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg)
STOP_TIMEOUT=600
UPTIME_THRESHOLD=1h
USR_ORA_DB_NAME=DB11G
USR_ORA_DOMAIN=haroland
USR_ORA_ENV=
USR_ORA_FLAGS=
USR_ORA_INST_NAME=DB11G
USR_ORA_OPEN_MODE=open
USR_ORA_OPI=false
USR_ORA_STOP_MODE=immediate
VERSION=11.2.0.1.0

    I removed the database-type-related entries from myResource.txt and modified some others to produce the following myNewResource.txt.
    - Notice the NAME property, which should not have the ora. prefix.
    - Notice the TYPE property, which is not ora.database.type but cluster_resource.
    - Notice the definition of ACTION_SCRIPT.
    - Notice the HOSTING_MEMBERS property, which enumerates the members of the cluster (as returned by the olsnodes command).
NAME=DB11G.db
TYPE=cluster_resource
DESCRIPTION=Oracle Database resource
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTION_SCRIPT=/crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh
PLACEMENT=restricted
ACTIVE_PLACEMENT=0
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=10
DEGREE=1
ENABLED=1
HOSTING_MEMBERS=oelcluster01 oelcluster02
LOGGING_LEVEL=1
RESTART_ATTEMPTS=1
START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg)
START_TIMEOUT=600
STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg)
STOP_TIMEOUT=600
UPTIME_THRESHOLD=1h

    Register the resource. Take care of the resource type: it needs to be a cluster_resource and not an ora.database.type resource (Oracle recommendation).

crsctl add resource DB11G.db -type cluster_resource -file /crs/11.2.0/HA_scripts/myNewResource.txt

    Step 3 – Start the resource

crsctl start resource DB11G.db

    This command launches the ACTION_SCRIPT with a start and a check parameter on the primary node of the cluster.

    Step 4 – Test this
    We will test the setup using 2 methods.

crsctl relocate resource DB11G.db

    This command calls the ACTION_SCRIPT (on the two nodes) to stop the database on the active node and start it on the other node. Once done, we can revert back to the original node, but this time we can use a more "MS$-like" method: turn off the server on which the database is running. After a short delay, you should observe that the database is relocated on node 1.

    Conclusion
    Once the software is installed and the standalone database created (which is a rather common and usual task), the steps to reach the objective are quite easy:
    - Create an executable action script on every node of the cluster.
    - Create a resource file.
    - Create/register the resource with the OCR using the resource file.
    - Start the resource.
    This solution is a very interesting alternative to licensable third-party solutions.

    References
    Clusterware 11gR2 documentation
    Oracle Clusterware Resource Reference

    Gilles Haro
    Technical Expert - Core Technology, Oracle Consulting

    Read the article

  • How do I install Ubuntu on my Mac PowerPC G5

    - by Matt
    How do I install Ubuntu on my PowerPC G5? Which version do I download, where do I download it from, and how do I get it to install? I tried burning Ubuntu PowerPC 12.04 and booting from the CD, and all I get is a DOS-like setup prompt, "boot:". I've tried 'live' and everything else listed when I push Tab, but every time I get a bunch of white text on a black screen, then black text on white, and then my monitor just goes black and nothing happens. What am I doing wrong? Any suggestions?

    Read the article

  • What are the most important concepts to understand for "fluency in developer English"?

    - by Edward Tanguay
    In April, I'm going to be giving a talk called "English 2.0 – Understanding the Language of Developers" to a group of English teachers. The purpose is to give them, in two hours, a quick background in key concepts so that they can better understand developer blogs and podcasts and are able to ask better questions when talking to developers. What do you think are the most important concepts to understand, concepts that developers take for granted but the general public is not familiar with? Here are a few ideas: version control, abstractions, pub/sub, push vs. pull, debugging, modularity, three-tier architecture, class/object, "spaghetti code" vs. OOP, exception throwing, crowd sourcing, refactoring, the cloud, DRY (don't repeat yourself), client/server, unit testing, designer/developer.

    Read the article

  • How to Tell If Your Computer is Overheating and What to Do About It

    - by Chris Hoffman
    Heat is a computer's enemy. Computers are designed with heat dispersion and ventilation in mind so they don't overheat. If too much heat builds up, your computer may become unstable or suddenly shut down. The CPU and graphics card produce much more heat when running demanding applications. If there's a problem with your computer's cooling system, an excess of heat could even physically damage its components.

    Is Your Computer Overheating?
    When using a typical computer in a typical way, you shouldn't have to worry about overheating at all. However, if you're encountering system instability issues like abrupt shutdowns, blue screens, and freezes — especially while doing something demanding like playing PC games or encoding video — your computer may be overheating. This can happen for several reasons. Your computer's case may be full of dust, a fan may have failed, something may be blocking your computer's vents, or you may have a compact laptop that was never designed to run at maximum performance for hours on end.

    Monitoring Your Computer's Temperature
    First, bear in mind that different CPUs and GPUs (graphics cards) have different optimal temperature ranges. Before getting too worried about a temperature, be sure to check your computer's documentation — or its CPU or graphics card specifications — and ensure you know the temperature ranges your hardware can handle. You can monitor your computer's temperatures in a variety of different ways. First, you may have a way to monitor temperature that is already built into your system. You can often view temperature values in your computer's BIOS or UEFI settings screen. This allows you to quickly see your computer's temperature if Windows freezes or blue screens on you — just boot the computer, enter the BIOS or UEFI screen, and check the temperatures displayed there. Note that not all BIOSes or UEFI screens will display this information, but it is very common. There are also programs that will display your computer's temperature. Such programs just read the sensors inside your computer and show you the temperature value they report, so there are a wide variety of tools you can use for this, from the simple Speccy system information utility to an advanced tool like SpeedFan. HWMonitor also offers this feature, displaying a wide variety of sensor information. Be sure to look at your CPU and graphics card temperatures. You can also find other temperatures, such as the temperature of your hard drive, but these components will generally only overheat if it becomes extremely hot in the computer's case. They shouldn't generate too much heat on their own. If you think your computer may be overheating, don't just glance at these sensors once and ignore them. Do something demanding with your computer, such as running a CPU burn-in test with Prime 95, playing a PC game, or running a graphical benchmark. Monitor the computer's temperature while you do this, even checking a few hours later — does any component overheat after you push it hard for a while?

    Preventing Your Computer From Overheating
    If your computer is overheating, here are some things you can do about it:
    Dust Out Your Computer's Case: Dust accumulates in desktop PC cases and even laptops over time, clogging fans and blocking air flow. This dust can cause ventilation problems, trapping heat and preventing your PC from cooling itself properly. Be sure to clean your computer's case occasionally to prevent dust build-up. Unfortunately, it's often more difficult to dust out overheating laptops.
    Ensure Proper Ventilation: Put the computer in a location where it can properly ventilate itself. If it's a desktop, don't push the case up against a wall so that the computer's vents become blocked, or leave it near a radiator or heating vent. If it's a laptop, be careful not to block its air vents, particularly when doing something demanding. For example, putting a laptop down on a mattress, allowing it to sink in, and leaving it there can lead to overheating — especially if the laptop is doing something demanding and generating heat it can't get rid of.
    Check if Fans Are Running: If you're not sure why your computer started overheating, open its case and check that all the fans are running. It's possible that a CPU, graphics card, or case fan failed or became unplugged, reducing air flow.
    Tune Up Heat Sinks: If your CPU is overheating, its heat sink may not be seated correctly or its thermal paste may be old. You may need to remove the heat sink and re-apply new thermal paste before reseating the heat sink properly. This tip applies more to tweakers, overclockers, and people who build their own PCs, especially if they may have made a mistake when originally applying the thermal paste.
    This is often much more difficult when it comes to laptops, which generally aren't designed to be user-serviceable. That can lead to trouble if the laptop becomes filled with dust and needs to be cleaned out, especially if the laptop was never designed to be opened by users at all. Consult our guide to diagnosing and fixing an overheating laptop for help with cooling down a hot laptop. Overheating is a definite danger when overclocking your CPU or graphics card. Overclocking will cause your components to run hotter, and the additional heat will cause problems unless you can properly cool your components. If you've overclocked your hardware and it has started to overheat — well, throttle back the overclock!
    Image Credit: Vinni Malek on Flickr

    Read the article

  • Can WinRT really be used at just the boundaries?

    - by Bret Kuhns
    Microsoft (chiefly, Herb Sutter) recommends when using WinRT with C++/CX to keep WinRT at the boundaries of the application and keep the core of the application written in standard ISO C++. I've been writing an application which I would like to leave portable, so my core functionality was written in standard C++, and I am now attempting to write a Metro-style front end for it using C++/CX. I've had a bit of a problem with this approach, however. For example, if I want to push a vector of user-defined C++ types to a XAML ListView control, I have to wrap my user-defined type in a WinRT ref/value type for it to be stored in a Vector^. With this approach, I'm inevitably left with wrapping a large portion of my C++ classes with WinRT classes. This is the first time I've tried to write a portable native application in C++. Is it really practical to keep WinRT along the boundaries like this? How else could this type of portable core with a platform-specific boundary be handled?

    Read the article

  • Matrix multiplication - Scene Graphs

    - by bgarate
    I wrote a MatrixStack class in C# to use in a SceneGraph. So, to get the world matrix for an object I am supposed to use: WorldMatrix = ParentWorld * LocalTransform. But, in fact, it only works as expected when I do it the other way around: WorldMatrix = LocalTransform * ParentWorld. My code is:

    public class MatrixStack
    {
        Stack<Matrix> stack = new Stack<Matrix>();
        Matrix result = Matrix.Identity;

        public void PushMatrix(Matrix matrix)
        {
            stack.Push(matrix);
            result = matrix * result;
        }

        public Matrix PopMatrix()
        {
            result = Matrix.Invert(stack.Peek()) * result;
            return stack.Pop();
        }

        public Matrix Result
        {
            get { return result; }
        }

        public void Clear()
        {
            stack.Clear();
            result = Matrix.Identity;
        }
    }

    Why does it work this way and not the other? Thanks!

    Read the article

  • How to store character moves (sprite animations)?

    - by Saad
    So I'm thinking about making a small RPG, mainly to test out different design patterns I've been learning about. But the one question I'm not too sure how to approach is how best to store an array of character moves. So let's say I have arrays of different sprites. This is how I'm thinking about implementing it:

    array attack = new array(10);
    array attack2 = new array(5);
    (loop)
        // blit some image
        attack.push(imageInstance);
    (end loop)

    Now every time I want the animation I call on attack or attack2; is there a better structure? The problem with this is, let's say there are 100 different attacks, and a player can have up to 10 attacks equipped. So how do I tell which attacks the user has; should I use a hash map?
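    One common arrangement (a sketch, not from the question; all names are invented) is to keep every animation in a single map keyed by attack name and let each character hold only the keys of the moves it has equipped:

```javascript
// All animations the game knows about, keyed by attack name.
// Each entry is just an ordered array of sprite frames.
var animations = {
    slash:    [/* frame images */],
    fireball: [/* frame images */]
};

// A character only stores which attacks are equipped (up to 10 keys),
// not the frames themselves, so 100 defined attacks cost nothing extra.
var player = {
    equipped: ['slash', 'fireball']
};

function framesFor(attackName) {
    // Look the frames up by name; undefined means the attack isn't defined.
    return animations[attackName];
}

// Playing the first equipped move:
var frames = framesFor(player.equipped[0]);  // the 'slash' frame array
```

    This keeps the "which attacks does the player have" question separate from "what does each attack look like", which is usually what the hash-map suggestion in the question is getting at.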

    Read the article

  • [News] Rethinking IDEs with the Code Bubbles interface

    Andrew Bragdon, a gifted student at an American university, has rethought IDE graphical interfaces, replacing windows with communicating bubbles. A far-fetched idea? Not at all; the demo (or rather the video) is stunning: "A bubble is a fully editable and interactive view of a fragment such as a method or collection of member variables. Bubbles, in contrast to windows, have minimal border decoration, avoid clipping their contents by using automatic code reflow and elision, and do not overlap but instead push each other out of the way". Absolutely worth a look; this is a concept with a future...

    Read the article

  • Welcome to the BI & Analytics Pulse Blog

    - by jacqueline.coolidge(at)oracle.com
    In this blog, we'll be taking the pulse of the BI and Analytics market. We get to meet people who are involved in every aspect of the market: customers that push the envelope and use BI in innovative ways, software developers, product managers, and sales teams in the field. This sparks lots of ideas. We'll share our experiences and ideas, and hope to generate discussion on topics that reflect what's going on in the market and where it will go next. The first topics will include self-service BI and in-memory analytics. Let us know what you think is interesting.

    Read the article

  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
    While it’s no longer headline news that Governments have carried out large scale data-mining programmes aimed at terrorism detection and identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics and justification over this action, will clearly continue for some time to come. What is becoming clear is that these programmes are a framework for the collation and aggregation of massive amounts of unstructured data and from this, the creation of actionable intelligence from analyses that allowed the analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections. Although Governance, Risk and Compliance (GRC) professionals are not looking at the implementation of such programmes, there are many similar GRC “Big data” challenges to be faced and potential lessons to be learned from these high profile government programmes that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence covering hidden signs and patterns of criminal activity, the early or retrospective, violation of regulations/laws/corporate policies and procedures, emerging risks and weakening controls etc. Not exactly the stuff of James Bond to be sure, but it is certainly more applicable to most GRC professional’s day to day challenges. So what is Big Data and how can it benefit the GRC process? Although it often varies, the definition of Big Data largely refers to the following types of data: Traditional Enterprise Data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data. Machine-Generated /Sensor Data – includes Call Detail Records (“CDR”), weblogs and trading systems data. Social Data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook. The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it’s often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester there are four key characteristics that define big data: Volume. Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by IT systems that power the enterprise. This includes live data from packaged and custom applications – for example, app servers, Web servers, databases, networks, virtual machines, telecom equipment, and much more. Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships valuable to customer relationship management as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data ensures large volumes (over 8 TB per day) need to be managed. Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change. 
Without question, all GRC professionals work in a dynamic environment and as new services, new products, new business lines are added or new marketing campaigns executed for example, new data types are needed to capture the resultant information.  Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action. For example, customer service calls and emails have millions of useful data points and have long been a source of information to GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the amount of customer complaints.   Now on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross referenced against what is being said about the organization or a similar peer organization on social media. The organization can then take positive actions, communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policy and procedures and completely minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team(s) has demonstrated real and tangible business value. Big Challenges - Big Opportunities As pointed out by recent Forrester research, high performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in Big Data.  "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012) As pointed out earlier, with the ever increasing volume of regulatory demands and fines for getting it wrong, limited resource availability and out of date or inadequate GRC systems all contributing to a higher cost of compliance and/or higher risk profile than desired – a big data investment in GRC clearly falls into this category. However, to make the most of big data organizations must evolve both their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and be able integrate them with the pre-existing company data to be analyzed. GRC big data clearly allows the organization access to and management over a huge amount of often very sensitive information that although can help create a more risk intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands over better information security and data protection the sheer amount of information organizations deal with the need to quickly access, classify, protect and manage that information can quickly become a key issue  from a legal, as well as technical or operational standpoint. 
    However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected.

    The Right GRC & Big Data Partnership Becomes Key
    The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping kick-start, or even overseeing a big data project. To make a big data GRC initiative work and get the desired value, partnerships with companies who have a long history of delivering successful GRC solutions, as well as being at the very forefront of technology innovation, become key. Clearly solutions can be built in-house more cheaply than through a vendor, but as has been proven time and time again when it comes to self-built solutions covering AML and Fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations or challenges that the GRC teams face on a daily basis. This has led to the creation of the GRC silos that are causing so many headaches today. The solutions that stand out and should be explored are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, utilizing Big Data technologies to generate new GRC insights right across the enterprise. Ultimately, Big Data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones that are well positioned to make the most of it.

    A Blueprint and Roadmap Service for Big Data
    Big data adoption is first and foremost a business decision. As such, it is essential that your partner can align your strategies, goals, and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish practical, effective governance that will maintain a well-managed environment going forward. Key Activities: while your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:
    - Clearly define your drivers, strategies, goals, objectives and requirements as they relate to big data
    - Conduct a big data readiness and Information Architecture maturity assessment
    - Develop a future-state big data architecture, including views across all relevant architecture domains: business, applications, information, and technology
    - Provide initial guidance on big data candidate selection for migrations or implementation
    - Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business
    - Provide recommendations for practical, effective Data Governance, Data Quality Management, and Information Lifecycle Management to maintain a well-managed environment
    - Conduct an executive workshop with recommendations and next steps
    There is little debate that managing risk and data are the two biggest obstacles encountered by financial institutions.
Big data is here to stay and risk management certainly is not going anywhere, and ultimately financial services industry organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be best positioned to make the most of it. Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

    Read the article

  • Sql Server Data Tools & Entity Framework - is there any synergy here?

    - by Benjol
    Coming out of a project using Linq2Sql, I suspect that the next (bigger) one might push me into the arms of Entity Framework. I've done some reading up on the subject, but what I haven't managed to find is a coherent story about how SQL Server Data Tools and Entity Framework should/could/might be used together. Were they conceived totally separately, so that using them together is rubbing them up the wrong way? Are they somehow totally orthogonal and I'm missing the point? Some reasons why I think I might want both: SSDT is great for having 'compiled' (checked) and easily versionable SQL and schema, but the SSDT 'migration/update' story is not convincing (to me): "update anything" works OK for schema, but there's no way (AFAIK) that it can ever work for data. On the other hand, I haven't tried the EF migrations to know if they present similar problems, but the Up/Down bits look quite handy.

    Read the article

  • Dynamic vs Statically typed languages for websites

    - by Bradford
    Wanted to hear what others thought about this statement: I’ll contrast that with building a website. When rendering web pages, often you have very many components interacting on a web page. You have buttons over here and little widgets over there and there are dozens of them on a webpage, as well as possibly dozens or hundreds of web pages on your website that are all dynamic. With a system with a really large surface area like that, using a statically typed language is actually quite inflexible. I would find it painful probably to program in Scala and render a web page with it, when I want to interactively push around buttons and what-not. If the whole system has to be coherent, like the whole system has to type check just to be able to move a button around, I think that can be really inflexible. Source: http://www.infoq.com/interviews/kallen-scala-twitter

    Read the article

  • jQuery Mobile list-view is not working after adding some jQuery code [closed]

    - by Kaidul Islam Sazal
    I am using jQuery Mobile, and I have an array makeArray in jQuery from which I have created a few list-view items. Everything works fine, but the jQuery Mobile list-view style is not shown; an ordinary list is shown instead. This is my code:

    $(document).ready(function(){
        var url = "inventory/inventory.json";
        var makeArray = new Array();
        $.getJSON(url, function(data){
            $.each(data, function(index, item){
                if(($.inArray(item.make, makeArray)) == -1){
                    makeArray.push(item.make);
                    $('.upper_case').append('<li data-icon="list-arrow"> <a href="trade_form.php?='+ item.make +'"><img src="images/car_logo/buick.png" class="ui-li-thumb"/>' + item.make + '</a></li>');
                }
            });
        });
    });
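    Items appended after the page has been enhanced usually need the list-view widget to be re-enhanced. A small sketch of that is below; it assumes the .upper_case element from the question is the one carrying data-role="listview".

```javascript
$.getJSON(url, function (data) {
    $.each(data, function (index, item) {
        // ...append the <li> elements exactly as in the question...
    });
    // Ask jQuery Mobile to re-apply its list-view styling to the new items.
    $('.upper_case').listview('refresh');
});
```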

    Read the article
