Search Results

Search found 16433 results on 658 pages for 'array sorting'.


  • Parallel programming in the .NET Framework 4 – Part I

    - by anobre
    Introduction The advance of technology in recent years has provided low-cost access to workstations with numerous CPUs. Today we easily find client machines with 2, 4 and even 8 cores, not to mention the “super servers” with up to 36 processors :) From Wikipedia: the central processing unit (CPU) or processor is the part of a computer system that carries out the instructions of a computer program, and it is the primary element in performing the computer's functions. The term has been used in the computer industry at least since the early 1960s[1]. The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation remains the same. By analogy, it would be very useful in the real world to delegate tasks that can be performed independently to different people, achieving greater performance and productivity. Parallel computing is based on the idea that a larger problem can be divided into smaller problems that are solved in parallel. This approach has long been used in HPC (high-performance computing), and the advances of recent years, along with concerns about power consumption, have made the idea more attractive and accessible in any environment. In the .NET Framework The .NET platform provides a runtime, libraries and tools that give easy, fast access to parallel programming without working directly with threads and the thread pool. This series of posts will cover the available features, starting with the TPL, or Task Parallel Library. Task Parallel Library The TPL is a set of types in the System.Threading and System.Threading.Tasks namespaces, available as of version 4 of the framework. Starting with version 4, the TPL is the recommended way to write parallel and multithreaded code. http://msdn.microsoft.com/en-us/library/dd460717(v=VS.100).aspx Task Parallelism The term “task parallelism” refers to one or more tasks running simultaneously. Think of a task as a method. The easiest way to run tasks in parallel is the code below: Parallel.Invoke(() => TrabalhoInicial(), () => TrabalhoSeguinte()); What actually happens? Behind the scenes, this call implicitly creates objects of type Task, which represent an asynchronous (not necessarily parallel) operation: public class Task : IAsyncResult, IDisposable. Tasks can also be instantiated explicitly, a more verbose alternative to Parallel.Invoke: var task = new Task(() => TrabalhoInicial()); task.Start(); Another option is to create a Task and schedule its work in a single step: var t = Task<int>.Factory.StartNew(() => TrabalhoInicialComValor()); var t2 = Task<int>.Factory.StartNew(() => TrabalhoSeguinteComValor()); The basic difference between the two approaches is that the first has an explicit start, which is useful when we do not want instantiation and scheduling to happen in a single operation, as they do in the second approach. Data Parallelism Still within the TPL, data parallelism refers to scenarios where the same operation must be executed in parallel over the elements of a collection or array, using the parallel For and ForEach constructs. The basic idea is to take each element of the collection (or array) and work on several of them concurrently across multiple threads.
    The key class for this scenario is System.Threading.Tasks.Parallel: // Sequential version foreach (var item in sourceCollection) { Process(item); } // Parallel equivalent Parallel.ForEach(sourceCollection, item => Process(item)); Complicated, right? :) Demonstration A video with examples (screencast) is available here. Careful! However strong the urge to start coding right away, watch out for some basic parallelism pitfalls; this link describes a few such situations. Regards.
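
    To make the snippets above concrete, here is a small, self-contained C# sketch of the same APIs; the method bodies and the sample data are illustrative only and are not taken from the original post:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Threading.Tasks;

        class TplDemo
        {
            static void Step1() { Console.WriteLine("first step"); }
            static void Step2() { Console.WriteLine("second step"); }

            static void Main()
            {
                // Task parallelism: run two independent methods in parallel (implicit Tasks).
                Parallel.Invoke(() => Step1(), () => Step2());

                // Explicitly created and scheduled Task; Result blocks until it finishes.
                Task<int> sum = Task<int>.Factory.StartNew(() => Enumerable.Range(1, 100).Sum());
                Console.WriteLine(sum.Result); // 5050

                // Data parallelism: apply the same operation to every element of a collection.
                var numbers = new List<int> { 1, 2, 3, 4, 5 };
                Parallel.ForEach(numbers, n => Console.WriteLine(n * n));
            }
        }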

    Read the article

  • Initializing and drawing a mesh using OpenTK

    - by Boreal
    I'm implementing a "Mesh" class to use in my OpenTK game. You pass in a vertex array and an index array, and then you can call Mesh.Draw() to draw it using a shader. I've heard VBO's and VAO's are the way to go for this approach, but nowhere have I found a guide that shows how to get Data Video Memory Shader. Can someone give me a quick rundown of how this works? EDIT: So far, I have this: struct Vertex { public Vector3 position; public Vector3 normal; public Vector3 color; public static int memSize = 9 * sizeof(float); public static byte[] memOffset = { 0, 3 * sizeof(float), 6 * sizeof(float) }; } class Mesh { private uint vbo; private uint ibo; // stores the numbers of vertices and indices private int numVertices; private int numIndices; public Mesh(int numVertices, Vertex[] vertices, int numIndices, ushort[] indices) { // set numbers this.numVertices = numVertices; this.numIndices = numIndices; // generate buffers GL.GenBuffers(1, out vbo); GL.GenBuffers(1, out ibo); GL.BindBuffer(BufferTarget.ArrayBuffer, vbo); GL.BindBuffer(BufferTarget.ElementArrayBuffer, ibo); // send data to the buffers GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(Vertex.memSize * numVertices), vertices, BufferUsageHint.StaticDraw); GL.BufferData(BufferTarget.ElementArrayBuffer, new IntPtr(sizeof(ushort) * numIndices), indices, BufferUsageHint.StaticDraw); } public void Render() { // bind buffers GL.BindBuffer(BufferTarget.ArrayBuffer, vbo); GL.BindBuffer(BufferTarget.ElementArrayBuffer, ibo); // define offsets GL.VertexPointer(3, VertexPointerType.Float, Vertex.memSize, new IntPtr(Vertex.memOffset[0])); GL.NormalPointer(NormalPointerType.Float, Vertex.memSize, new IntPtr(Vertex.memOffset[1])); GL.ColorPointer(3, ColorPointerType.Float, Vertex.memSize, new IntPtr(Vertex.memOffset[2])); // draw GL.DrawElements(BeginMode.Triangles, numIndices, DrawElementsType.UnsignedInt, (IntPtr)0); } } class Application : GameWindow { Mesh triangle; protected override void OnLoad(EventArgs e) { base.OnLoad(e); GL.ClearColor(0.1f, 0.2f, 0.5f, 0.0f); GL.Enable(EnableCap.DepthTest); GL.Enable(EnableCap.VertexArray); GL.Enable(EnableCap.NormalArray); GL.Enable(EnableCap.ColorArray); Vertex v0 = new Vertex(); v0.position = new Vector3(-1.0f, -1.0f, 4.0f); v0.normal = new Vector3(0.0f, 0.0f, -1.0f); v0.color = new Vector3(1.0f, 1.0f, 0.0f); Vertex v1 = new Vertex(); v1.position = new Vector3(1.0f, -1.0f, 4.0f); v1.normal = new Vector3(0.0f, 0.0f, -1.0f); v1.color = new Vector3(1.0f, 0.0f, 0.0f); Vertex v2 = new Vertex(); v2.position = new Vector3(0.0f, 1.0f, 4.0f); v2.normal = new Vector3(0.0f, 0.0f, -1.0f); v2.color = new Vector3(0.2f, 0.9f, 1.0f); Vertex[] va = { v0, v1, v2 }; ushort[] ia = { 0, 1, 2 }; triangle = new Mesh(3, va, 3, ia); } protected override void OnRenderFrame(FrameEventArgs e) { base.OnRenderFrame(e); GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit); Matrix4 modelview = Matrix4.LookAt(Vector3.Zero, Vector3.UnitZ, Vector3.UnitY); GL.MatrixMode(MatrixMode.Modelview); GL.LoadMatrix(ref modelview); triangle.Render(); SwapBuffers(); } } It doesn't draw anything.

    Read the article

  • PHP PSR-0 + several namespaces in one file and autoload

    - by Nemoden
    I've been thinking for a while about defining several namespaces in one php file and so, having several classes inside this file. Suppose, I want to implement something like Doctrine\ORM\Query\Expr: Expr.php Expr |-- Andx.php |-- Base.php |-- Comparison.php |-- Composite.php |-- From.php |-- Func.php |-- GroupBy.php |-- Join.php |-- Literal.php |-- Math.php |-- OrderBy.php |-- Orx.php `-- Select.php It would be nice if I had all of this in one file - Expr.php: namespace Doctrine\ORM\Query; class Expr { // code } namespace Doctrine\ORM\Query\Expr; class Func { // code } // etc... What I'm thinking of is directories naming convention and, unlike PSR-0 having several classes and namespaces in one file. It's best explained by the code: ls Doctrine/orm/query Expr.php that's it - only Expr.php Since Expr.php is somewhat I call a "meta-namespace" for Expr\Func, it make sense to place all the classes inside Expr.php (as shown above). So, the vendor name is still starts with an uppercased letter (Doctrine) and the other parts of namespace start with lowercased letter. We can write an autoload so it would respect this notion: function load_class($class) { if (class_exists($class)) { return true; } $tokenized_path = explode(array("_", "\\"), DIRECTORY_SEPARATOR, $class); // array('Doctrine', 'orm', 'query', 'Expr', 'Func'); // ^^^^ // first, we are looking for first uppercased namespace part // and if it's not last (not the class name), we use it as a filename // and wiping away the rest to compose a path to a file we need to include if (FALSE !== ($meta_class_index = find_meta_class($tokenized_path))) { $new_tokenized_path = array_slice($tokenized_path, 0, $meta_class_index); $path_to_class = implode(DIRECTORY_SEPARATOR, $new_tokenized_path); } else { // no meta class found $path_to_class = implode(DIRECTORY_SEPARATOR, $tokenized_path); } if (file_exists($path_to_class.'.php')) { require_once $path_to_class.'.php'; } return false; } Another reason to do so is to reduce a number of php files scattered among directories. Usually you check file existence before you require a file to fail gracefully: file_exists($path_to_class.'.php'); If you take a look at actual Doctrine\ORM\Query\Expr code, you'll see they use all of the "inner-classes", so you actually do: file_exists("/path/to/Doctrine/ORM/Query/Expr.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/AndX.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/Base.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/Comparison.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/Composite.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/From.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/Func.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/GroupBy.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/Join.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/Literal.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/Math.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/OrderBy.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/Orx.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/Select.php"); in your autoload which causes quite a few I/O reads. Isn't it too much to check on each user's hit? I'm just putting this on a discussion. I want to hear from another PHP programmers what do they think of it. And, of course, if you have a silver bullet addressing this problems I've designated here, please share. 
    I have also been wondering whether my vague question fits here; according to the FAQ, it seems to address a "software architecture" problem slash proposal. I'm sorry if my scribble seems a bit clunky :) Thanks.
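
    For what it's worth, here is a minimal sketch of the autoloader idea described above, written in plain PHP. spl_autoload_register and the string functions are standard; the directory layout, the lowercased path segments and the "first uppercased, non-final segment names the file" heuristic are assumptions taken from the question rather than an established convention:

        <?php
        // Hypothetical loader: Doctrine\orm\query\Expr\Func is expected to live in
        // Doctrine/orm/query/Expr.php together with the other Expr\* classes.
        spl_autoload_register(function ($class) {
            $parts = preg_split('/[\\\\_]/', $class);   // split on "\" and "_"
            $path  = array();
            foreach ($parts as $i => $part) {
                $path[] = $part;
                $isLast = ($i === count($parts) - 1);
                // Stop at the first uppercased segment that is not the vendor
                // name and not the class name itself (the "meta-namespace" file).
                if (!$isLast && $i > 0 && $part !== '' && ctype_upper($part[0])) {
                    break;
                }
            }
            $file = implode(DIRECTORY_SEPARATOR, $path) . '.php';
            if (is_file($file)) {
                require_once $file;
            }
        });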

    Read the article

  • Download YouTube Videos the Easy Way

    - by Trevor Bekolay
    You can’t be online all the time, and despite the majority of YouTube videos being nut-shots and Lady Gaga parodies, there is a lot of great content that you might want to download and watch offline. There are some programs and browser extensions to do this, but we’ve found that the easiest and quickest method is a bookmarklet that was originally posted on the Google Operating System blog (it’s since been removed). It will let you download standard quality and high-definition movies as MP4 files. Also, because it’s a bookmarklet, it will work on any modern web browser, and on any operating system! Installing the bookmarket is easy – just drag and drop the Get YouTube video link below to the bookmarks bar of your browser of choice. If you’ve hidden the bookmark bar, in most browsers you can right-click on the link and save it to your bookmarks. Get YouTube video   With the bookmarklet available in your browser, go to the YouTube video that you’d like to download. Click on the Get YouTube video link in your bookmarks bar, or in the bookmarks menu, wherever you saved it earlier. You will notice some new links appear below the description of the video. If you download the standard definition file, it will save as “video.mp4” by default. However, if you download the high definition file, it will save with the same name as the title of the video. There are many methods of downloading YouTube videos…but we think this is the easiest and quickest method. You don’t have to install anything or use up resources, but you can still get a link to download an MP4 with one click. Do you use a different method to download Youtube videos? Let us know about it in the comments! javascript:(function(){if(document.getElementById(’download-youtube-video’))return;var args=null,video_title=null,video_id=null,video_hash=null;var download_code=new Array();var fmt_labels={‘18′:’standard%20MP4′,’22′:’HD%20720p’,'37′:’HD%201080p’};try{args=yt.getConfig(’SWF_ARGS’);video_title=yt.getConfig(’VIDEO_TITLE’)}catch(e){}if(args){var fmt_url_map=unescape(args['fmt_url_map']);if(fmt_url_map==”)return;video_id=args['video_id'];video_hash=args['t'];video_title=video_title.replace(/[%22\'\?\\\/\:\*%3C%3E]/g,”);var fmt=new Array();var formats=fmt_url_map.split(’,');var format;for(var i=0;i%3Cformats.length;i++){var format_elems=formats[i].split(’|');fmt[format_elems[0]]=unescape(format_elems[1])}for(format in fmt_labels){if(fmt[format]!=null){download_code.push(’%3Ca%20href=\”+(fmt[format]+’&title=’+video_title)+’\'%3E’+fmt_labels[format]+’%3C/a%3E’)}elseif(format==’18′){download_code.push(’%3Ca%20href=\’http://www.youtube.com/get_video?fmt=18&video_id=’+video_id+’&t=’+video_hash+’\'%3E’+fmt_labels[format]+’%3C/a%3E’)}}}if(video_id==null||video_hash==null)return;var div_embed=document.getElementById(’watch-embed-div’);if(div_embed){var div_download=document.createElement(’div’);div_download.innerHTML=’%3Cbr%20/%3E%3Cspan%20id=\’download-youtube-video\’%3EDownload:%20′+download_code.join(’%20|%20′)+’%3C/span%3E’;div_embed.appendChild(div_download)}})() Similar Articles Productive Geek Tips Watch YouTube Videos in Cinema Style in FirefoxDownload YouTube Videos with Cheetah YouTube DownloaderStop YouTube Videos from Automatically Playing in FirefoxImprove YouTube Video Viewing in Google ChromeConvert YouTube Videos to MP3 with YouTube Downloader TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to 

    Read the article

  • Sun Fire X4800 M2 Posts World Record x86 SPECjEnterprise2010 Result

    - by Brian
    Oracle's Sun Fire X4800 M2 using the Intel Xeon E7-8870 processor and Sun Fire X4470 M2 using the Intel Xeon E7-4870 processor, produced a world record single application server SPECjEnterprise2010 benchmark result of 27,150.05 SPECjEnterprise2010 EjOPS. The Sun Fire X4800 M2 server ran the application tier and the Sun Fire X4470 M2 server was used for the database tier. The Sun Fire X4800 M2 server demonstrated 63% better performance compared to IBM P780 server result of 16,646.34 SPECjEnterprise2010 EjOPS. The Sun Fire X4800 M2 server demonstrated 4% better performance than the Cisco UCS B440 M2 result, both results used the same number of processors. This result used Oracle WebLogic Server 12c, Java HotSpot(TM) 64-Bit Server 1.7.0_02, and Oracle Database 11g. This result was produced using Oracle Linux. Performance Landscape Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results. The table below compares against the best results from IBM and Cisco. SPECjEnterprise2010 Performance Chart as of 3/12/2012 Submitter EjOPS* Application Server Database Server Oracle 27,150.05 1x Sun Fire X4800 M2 8x 2.4 GHz Intel Xeon E7-8870 Oracle WebLogic 12c 1x Sun Fire X4470 M2 4x 2.4 GHz Intel Xeon E7-4870 Oracle Database 11g (11.2.0.2) Cisco 26,118.67 2x UCS B440 M2 Blade Server 4x 2.4 GHz Intel Xeon E7-4870 Oracle WebLogic 11g (10.3.5) 1x UCS C460 M2 Blade Server 4x 2.4 GHz Intel Xeon E7-4870 Oracle Database 11g (11.2.0.2) IBM 16,646.34 1x IBM Power 780 8x 3.86 GHz POWER 7 WebSphere Application Server V7 1x IBM Power 750 Express 4x 3.55 GHz POWER 7 IBM DB2 9.7 Workgroup Server Edition FP3a * SPECjEnterprise2010 EjOPS, bigger is better. Configuration Summary Application Server: 1 x Sun Fire X4800 M2 8 x 2.4 GHz Intel Xeon processor E7-8870 256 GB memory 4 x 10 GbE NIC 2 x FC HBA Oracle Linux 5 Update 6 Oracle WebLogic Server 11g Release 1 (10.3.5) Java HotSpot(TM) 64-Bit Server VM on Linux, version 1.7.0_02 (Java SE 7 Update 2) Database Server: 1 x Sun Fire X4470 M2 4 x 2.4 GHz Intel Xeon E7-4870 512 GB memory 4 x 10 GbE NIC 2 x FC HBA 2 x Sun StorageTek 2540 M2 4 x Sun Fire X4270 M2 4 x Sun Storage F5100 Flash Array Oracle Linux 5 Update 6 Oracle Database 11g Enterprise Edition Release 11.2.0.2 Benchmark Description SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The SPECjEnterprise2010 benchmark has been designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems. The workload consists of an end to end web based order processing domain, an RMI and Web Services driven manufacturing domain and a supply chain model utilizing document based Web Services. The application is a collection of Java classes, Java Servlets, Java Server Pages, Enterprise Java Beans, Java Persistence Entities (pojo's) and Message Driven Beans. The SPECjEnterprise2010 benchmark heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network. 
The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second ("SPECjEnterprise2010 EjOPS"). This metric is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is no price/performance metric in this benchmark. Key Points and Best Practices Sixteen Oracle WebLogic server instances were started using numactl, binding 2 instances per chip. Eight Oracle database listener processes were started, binding 2 instances per chip using taskset. Additional tuning information is in the report at http://spec.org. See Also Oracle Press Release -- SPECjEnterprise2010 Results Page Sun Fire X4800 M2 Server oracle.com OTN Sun Fire X4270 M2 Server oracle.com OTN Sun Storage 2540-M2 Array oracle.com OTN Oracle Linux oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN WebLogic Suite oracle.com OTN Disclosure Statement SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Sun Fire X4800 M2, 27,150.05 SPECjEnterprise2010 EjOPS; IBM Power 780, 16,646.34 SPECjEnterprise2010 EjOPS; Cisco UCS B440 M2, 26,118.67 SPECjEnterprise2010 EjOPS. Results from www.spec.org as of 3/27/2012.

    Read the article

  • Oracle TimesTen In-Memory Database Performance on SPARC T4-2

    - by Brian
    The Oracle TimesTen In-Memory Database is optimized to run on Oracle's SPARC T4 processor platforms running Oracle Solaris 11 providing unsurpassed scalability, performance, upgradability, protection of investment and return on investment. The following demonstrate the value of combining Oracle TimesTen In-Memory Database with SPARC T4 servers and Oracle Solaris 11: On a Mobile Call Processing test, the 2-socket SPARC T4-2 server outperforms: Oracle's SPARC Enterprise M4000 server (4 x 2.66 GHz SPARC64 VII+) by 34%. Oracle's SPARC T3-4 (4 x 1.65 GHz SPARC T3) by 2.7x, or 5.4x per processor. Utilizing the TimesTen Performance Throughput Benchmark (TPTBM), the SPARC T4-2 server protects investments with: 2.1x the overall performance of a 4-socket SPARC Enterprise M4000 server in read-only mode and 1.5x the performance in update-only testing. This is 4.2x more performance per processor than the SPARC64 VII+ 2.66 GHz based system. 10x more performance per processor than the SPARC T2+ 1.4 GHz server. 1.6x better performance per processor than the SPARC T3 1.65 GHz based server. In replication testing, the two socket SPARC T4-2 server is over 3x faster than the performance of a four socket SPARC Enterprise T5440 server in both asynchronous replication environment and the highly available 2-Safe replication. This testing emphasizes parallel replication between systems. Performance Landscape Mobile Call Processing Test Performance System Processor Sockets/Cores/Threads Tps SPARC T4-2 SPARC T4, 2.85 GHz 2 16 128 218,400 M4000 SPARC64 VII+, 2.66 GHz 4 16 32 162,900 SPARC T3-4 SPARC T3, 1.65 GHz 4 64 512 80,400 TimesTen Performance Throughput Benchmark (TPTBM) Read-Only System Processor Sockets/Cores/Threads Tps SPARC T3-4 SPARC T3, 1.65 GHz 4 64 512 7.9M SPARC T4-2 SPARC T4, 2.85 GHz 2 16 128 6.5M M4000 SPARC64 VII+, 2.66 GHz 4 16 32 3.1M T5440 SPARC T2+, 1.4 GHz 4 32 256 3.1M TimesTen Performance Throughput Benchmark (TPTBM) Update-Only System Processor Sockets/Cores/Threads Tps SPARC T4-2 SPARC T4, 2.85 GHz 2 16 128 547,800 M4000 SPARC64 VII+, 2.66 GHz 4 16 32 363,800 SPARC T3-4 SPARC T3, 1.65 GHz 4 64 512 240,500 TimesTen Replication Tests System Processor Sockets/Cores/Threads Asynchronous 2-Safe SPARC T4-2 SPARC T4, 2.85 GHz 2 16 128 38,024 13,701 SPARC T5440 SPARC T2+, 1.4 GHz 4 32 256 11,621 4,615 Configuration Summary Hardware Configurations: SPARC T4-2 server 2 x SPARC T4 processors, 2.85 GHz 256 GB memory 1 x 8 Gbs FC Qlogic HBA 1 x 6 Gbs SAS HBA 4 x 300 GB internal disks Sun Storage F5100 Flash Array (40 x 24 GB flash modules) 1 x Sun Fire X4275 server configured as COMSTAR head SPARC T3-4 server 4 x SPARC T3 processors, 1.6 GHz 512 GB memory 1 x 8 Gbs FC Qlogic HBA 8 x 146 GB internal disks 1 x Sun Fire X4275 server configured as COMSTAR head SPARC Enterprise M4000 server 4 x SPARC64 VII+ processors, 2.66 GHz 128 GB memory 1 x 8 Gbs FC Qlogic HBA 1 x 6 Gbs SAS HBA 2 x 146 GB internal disks Sun Storage F5100 Flash Array (40 x 24 GB flash modules) 1 x Sun Fire X4275 server configured as COMSTAR head Software Configuration: Oracle Solaris 11 11/11 Oracle TimesTen 11.2.2.4 Benchmark Descriptions TimesTen Performance Throughput BenchMark (TPTBM) is shipped with TimesTen and measures the total throughput of the system. The workload can test read-only, update-only, delete and insert operations as required. Mobile Call Processing is a customer-based workload for processing calls made by mobile phone subscribers. The workload has a mixture of read-only, update, and insert-only transactions. 
The peak throughput performance is measured from multiple concurrent processes executing the transactions until a peak performance is reached via saturation of the available resources. Parallel Replication tests using both asynchronous and 2-Safe replication methods. For asynchronous replication, transactions are processed in batches to maximize the throughput capabilities of the replication server and network. In 2-Safe replication, also known as no data-loss or high availability, transactions are replicated between servers immediately emphasizing low latency. For both environments, performance is measured in the number of parallel replication servers and the maximum transactions-per-second for all concurrent processes. See Also SPARC T4-2 Server oracle.com OTN Oracle TimesTen In-Memory Database oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Disclosure Statement Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 1 October 2012.

    Read the article

  • concurrency::accelerator_view

    - by Daniel Moth
    Overview We saw previously that accelerator represents a target for our C++ AMP computation or memory allocation and that there is a notion of a default accelerator. We ended that post by introducing how one can obtain accelerator_view objects from an accelerator object through the accelerator class's default_view property and the create_view method. The accelerator_view objects can be thought of as handles to an accelerator. You can also construct an accelerator_view given another accelerator_view (through the copy constructor or the assignment operator overload). Speaking of operator overloading, you can also compare (for equality and inequality) two accelerator_view objects between them to determine if they refer to the same underlying accelerator. We'll see later that when we use concurrency::array objects, the allocation of data takes place on an accelerator at array construction time, so there is a constructor overload that accepts an accelerator_view object. We'll also see later that a new concurrency::parallel_for_each function overload can take an accelerator_view object, so it knows on what target to execute the computation (represented by a lambda that the parallel_for_each also accepts). Beyond normal usage, accelerator_view is a quality of service concept that offers isolation to multiple "consumers" of an accelerator. If in your code you are accessing the accelerator from multiple threads (or, in general, from different parts of your app), then you'll want to create separate accelerator_view objects for each thread. flush, wait, and queuing_mode When you create an accelerator_view via the create_view method of the accelerator, you pass in an option of immediate or deferred, which are the two members of the queuing_mode enum. At any point you can access this value from the queuing_mode property of the accelerator_view. When the queuing_mode value is immediate (which is the default), any commands sent to the device such as kernel invocations and data transfers (e.g. parallel_for_each and copy, as we'll see in future posts), will get submitted as soon as the runtime sees fit (that is the definition of immediate). When the value of queuing_mode is deferred, the commands will be batched up. To send all buffered commands to the device for execution, there is a non-blocking flush method that you can call. If you wish to block until all the commands have been sent, there is a wait method you can call. Deferring is a more advanced scenario aimed at performance gains when you are submitting many device commands and you want to avoid the tiny overhead of flushing/submitting each command separately. Querying information Just like accelerator, accelerator_view exposes the is_debug and version properties. In fact, you can always access the accelerator object from the accelerator property on the accelerator_view class to access the accelerator interface we looked at previously. Interop with D3D (aka DX) In a later post I'll show an example of an app that uses C++ AMP to compute data that is used in pixel shaders. In those scenarios, you can benefit by integrating C++ AMP into your graphics pipeline and one of the building blocks for that is being able to use the same device context from both the compute kernel and the other shaders. You can do that by going from accelerator_view to device context (and vice versa), through part of our interop API in amp.h: *get_device, create_accelerator_view. More on those in a later post. 
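
    As a rough illustration of the concepts above, here is a minimal C++ AMP sketch. It is written against the released amp.h API, so it uses restrict(amp) rather than the Developer Preview's restrict(direct3d), and the exact overloads should be treated as a sketch rather than a reference:

        #include <amp.h>
        #include <vector>
        using namespace concurrency;

        int main()
        {
            accelerator acc;                          // the default accelerator
            accelerator_view av = acc.default_view;   // a handle (view) onto that accelerator
            accelerator_view av2(av);                 // copy-constructed view
            bool same = (av == av2);                  // views can be compared for equality
            // acc.create_view(...) creates additional, isolated views, optionally
            // with a queuing_mode, for multiple consumers of the same accelerator.

            // Data allocated in a concurrency::array lives on the accelerator
            // named by the view passed to its constructor.
            std::vector<float> src(256, 1.0f);
            array<float, 1> data(256, src.begin(), av);

            // The same view tells parallel_for_each where to run the kernel.
            parallel_for_each(av, data.extent, [&data](index<1> idx) restrict(amp)
            {
                data[idx] *= 2.0f;
            });

            av.flush();  // non-blocking: submit any buffered commands to the device
            av.wait();   // block until everything submitted to this view has finished
            return 0;
        }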
Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • glsl shader to allow color change of skydome ogre3d

    - by Tim
    I'm still very new to all this but learning a lot. I'm putting together an application using Ogre3d as the rendering engine. So far I've got it running, with a simple scene, a day/night cycle system which is working okay. I'm now moving on to looking at changing the color of the skydome material based on the time of day. What I've done so far is to create a struct to hold the ColourValues for the different aspects of the scene. struct todColors { Ogre::ColourValue sky; Ogre::ColourValue ambient; Ogre::ColourValue sun; }; I created an array to store all the colours todColors sceneColours [4]; I populated the array with the colours I want to use for the various times of the day. For instance DayTime (when the sun is high in the sky) sceneColours[2].sky = Ogre::ColourValue(135/255, 206/255, 235/255, 255); sceneColours[2].ambient = Ogre::ColourValue(135/255, 206/255, 235/255, 255); sceneColours[2].sun = Ogre::ColourValue(135/255, 206/255, 235/255, 255); I've got code to work out the time of the day using a float currentHours to store the current hour of the day 10.5 = 10:30 am. This updates constantly and updates the sun as required. I am then calculating the appropriate colours for the time of day when relevant using else if( currentHour >= 4 && currentHour < 7) { // Lerp from night to morning Ogre::ColourValue lerp = Ogre::Math::lerp<Ogre::ColourValue, float>(sceneColours[GT_TOD_NIGHT].sky , sceneColours[GT_TOD_MORNING].sky, (currentHour - 4) / (7 - 4)); } My original attempt to get this to work was to dynamically generate a material with the new colour and apply that material to the skydome. This, as you can probably guess... didn't go well. I know it's possible to use shaders where you can pass information such as colour to the shader from the code but I am unsure if there is an existing simple shader to change a colour like this or if I need to create one. What is involved in creating a shader and material definition that would allow me to change the colour of a material without the overheads of dynamically generating materials all the time? EDIT : I've created a glsl vertex and fragment shaders as follows. Vertex uniform vec4 newColor; void main() { gl_FrontColor = newColor; gl_Position = ftransform(); } Fragment void main() { gl_FragColor = gl_Color; } I can pass a colour to it using ShaderDesigner and it seems to work. I now need to investigate how to use it within Ogre as a material. EDIT : I created a material file like this : vertex_program colour_vs_test glsl { source test.vert default_params { param_named newColor float4 0.0 0.0 0.0 1 } } fragment_program colour_fs_glsl glsl { source test.frag } material Test/SkyColor { technique { pass { lighting off fragment_program_ref colour_fs_glsl { } vertex_program_ref colour_vs_test { } } } } In the code I have tried : Ogre::MaterialPtr material = Ogre::MaterialManager::getSingleton().getByName("Test/SkyColor"); Ogre::GpuProgramParametersSharedPtr params = material->getTechnique(0)->getPass(0)->getVertexProgramParameters(); params->setNamedConstant("newcolor", Ogre::Vector4(0.7, 0.5, 0.3, 1)); I've set that as the Skydome material which seems to work initially. I am doing the same with the code that is attempting to lerp between colours, but when I include it there, it all goes black. Seems like there is now a problem with my colour lerping.

    Read the article

  • Filtering data in LINQ with the help of where clause

    - by vik20000in
    LINQ has brought with it the superpower of querying objects, databases, XML, SharePoint and nearly any other data structure. The power of LINQ lies in the fact that it is managed code that lets you write SQL-like code to fetch data. Whenever we work with data we need a way to filter it based on different conditions. In this post we will look at some of the different ways in which we can filter data in LINQ with the help of the where clause. Simple filter for an array. Let's say we have an array of numbers and we want to filter the data based on some condition. Below is an example: int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 }; var lowNums = from num in numbers where num < 5 select num; Filter based on one property of a class. With the help of LINQ we can also filter data from a list based on the value of some property. var soldOutProducts = from prod in products where prod.UnitsInStock == 0 select prod; Filter based on multiple properties of a class. var expensiveInStockProducts = from prod in products where prod.UnitsInStock > 0 && prod.UnitPrice > 3.00M select prod; Filter based on the index of the item in the list. In the example below we filter data based on the index of the item in the list. string[] digits = { "zero", "one", "two", "three", "four", "five", "six" }; var shortDigits = digits.Where((digit, index) => digit.Length < index); There are many other ways in which we can filter data in LINQ. In this post I have tried to show a few of them using LINQ. Vikram
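
    Putting the snippets above into a runnable form (the Product type and the sample data are made up for illustration; only the where-clause usage itself comes from the post):

        using System;
        using System.Linq;

        class Product
        {
            public string Name { get; set; }
            public int UnitsInStock { get; set; }
            public decimal UnitPrice { get; set; }
        }

        class WhereDemo
        {
            static void Main()
            {
                var products = new[]
                {
                    new Product { Name = "Chai",          UnitsInStock = 0,  UnitPrice = 18.00M },
                    new Product { Name = "Aniseed Syrup", UnitsInStock = 13, UnitPrice = 10.00M },
                    new Product { Name = "Ikura",         UnitsInStock = 31, UnitPrice = 31.00M }
                };

                // Query syntax: filter on two properties at once.
                var expensiveInStock = from prod in products
                                       where prod.UnitsInStock > 0 && prod.UnitPrice > 20.00M
                                       select prod.Name;

                // Method syntax: the Where overload that also exposes the element's index.
                string[] digits = { "zero", "one", "two", "three", "four", "five", "six" };
                var shortDigits = digits.Where((digit, index) => digit.Length < index);

                Console.WriteLine(string.Join(", ", expensiveInStock)); // Ikura
                Console.WriteLine(string.Join(", ", shortDigits));      // five, six
            }
        }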

    Read the article

  • Using the RSSBus Salesforce Excel Add-In From Excel Macros (VBA)

    - by dataintegration
    The RSSBus Salesforce Excel Add-In makes it easy to retrieve and update data from Salesforce from within Microsoft Excel. In addition to the built-in wizards that make data manipulation possible without code, the full functionality of the RSSBus Excel Add-Ins is available programmatically with Excel Macros (VBA) and Excel Functions. This article shows how to write an Excel macro that can be used to perform bulk inserts into Salesforce. Although this article uses the Salesforce Excel Add-In as an example, the same process can be applied to any of the Excel Add-Ins available on our website. Step 1: Download and install the RSSBus Excel Add-In available on our website. Step 2: Open Excel and create placeholder cells for the connection details that are needed from the macro. In this article, a spreadsheet will be created for batch inserts, and these cells will store the connection details, and will be used to report the job Id, the batch Id, and the batch status. Step 3: Switch to the Developer tab in Excel. Add a new button on the spreadsheet, and create a new macro associated with it. This macro will contain the code needed to insert a batch of rows into Salesforce. Step 4: Add a reference to the Excel Add-In by selecting Tools --> References --> RSSBus Excel Add-In. The macro functions of the Excel Add-In will be available once the reference has been added. The following code shows how to call a Stored Procedure. In this example, a job is created to insert Leads by calling the CreateJob stored procedure. CreateJob returns a jobId that can be used to upload a large number of Leads in one transaction. Note the use of cells B1, B2, B3, and B4 that were created in Step 2 to read the connection settings from the Excel spreadsheet and to write out the status of the procedure. methodName = "CreateJob" module.SetProviderName ("Salesforce") nameArray = Array("ObjectName", "Action", "ConcurrencyMode") valueArray = Array("Lead", "insert", "Serial") user = Range("B1").value pass = Range("B2").value atoken = Range("B3").value If (Not user = "" And Not pass = "" And Not atoken = "") Then module.SetConnectionString ("User=" + user + ";Password=" + pass + ";Access Token=" + atoken + ";") If module.CallSP(methodName, nameArray, valueArray) Then Dim ColumnCount As Integer ColumnCount = module.GetColumnCount Dim idIndex As Integer For Count = 0 To ColumnCount - 1 Dim colName As String colName = module.GetColumnName(Count) If module.GetColumnName(Count) = "id" Then idIndex = Count End If Next While (Not module.EOF) Range("B4").value = module.GetValue(idIndex) module.MoveNext Wend Else MsgBox "The CreateJob query failed." End If Exit Sub Else MsgBox "Please specify the connection details." Exit Sub End If Error: MsgBox "ERROR: " & Err.Description Step 5: Add the code to your macro. If you use the code above, you can check the results at Salesforce.com. They can be seen at Administration Setup -> Monitoring -> Bulk Data Load Jobs. Download the attached sample file for a more complete demo. Distributing an Excel File With Macros An Excel file with macros is saved using the .xlsm extension. The code for the macro remains in the Excel file, and you can distribute your Excel file to any machine where the RSSBus Salesforce Excel Add-In is already installed. Macro Sample File Please download the fully functional sample Excel file that includes the code referenced here. You will also need the RSSBus Excel Add-In to make the connection. You can download a free trial here.
Note: You may get an error message stating: "Can't find project or library." in Excel 2007, since this example is made using Excel 2010. To resolve this, navigate to Tools -> References and uncheck the "MISSING: RSSBus Excel Add-In", then scroll down and check the "RSSBus Excel Add-In" listed below it.

    Read the article

  • 30 Steps to Master ASP.NET MVC Application development

    - by Rajesh Pillai
    Welcome Readers!, I am starting a new series on ASP.NET MVC skill building which will be posted over the next couple of weeks. Let me know your thoughts on the content I have planned; a couple of the topics have been taken from the ASP.NET MVC2 Cookbook. (NOTE: only the headings have been taken, not the content :)). Do let me know what you would like to see, or any additional inputs or ideas to cover in these topics. The 30 steps are outlined below for quick reference. I will start filling this out quickly. Outlined below are the 30 steps to master ASP.NET MVC. A Peek Into Model What is a model? Different types of model Presentation/ViewModel Model Mapping (AutoMapper) A Peek into View How view works in ASP.NET MVC? View Engine Design Custom View Engine View Best Practices Templated Helpers Partial Views A Peek into Controller Introduction Controller Design Controller Best Practices Asynchronous Controller Custom Action Result Action Filters Controller Factory to use with IOC Routes Explanation Routes from the database Routes from XML More complex routing Master Pages Basics Setting Master Page Dynamically Working with data in the view Repeating Views Array of check boxes Array of radio buttons Paged data CRUD Client side action Confirmation Dialog (modal window) jqGrid Working with Forms Validation Model Validation with DataAnnotations Using the xVal validation framework Client side validation with jQuery Validation Fluent Validation Model Binders Templating Create strongly typed helper using T4 Custom View Templates with T4 Create custom MVC project template using T4 IOC AutoFac Ninject Unity Application Areas jQuery, Ajax and jQuery Plugins State Maintenance Application State User state Cookies Webfarm Error Handling View error handling Controller error handling ELMAH (Error Logging Modules and Handlers) Authentication and Authorization User Registration form SignOn Process Password Reminder Membership and Roles Windows authentication Restricting access to all pages Restricting access to selected pages Restricting access to pages by role Restricting access to a controller Restricting access to selected area Profiles and Themes Using Profiles Inheriting a Profile Migrating an anonymous profile Creating custom themes Using themes User personalized themes Configuration Adding custom application settings in web.config Displaying custom error messages Accessing other web.config configuration elements Adding custom configuration elements to web.config Encrypting web.config sections Tracing, Debugging and Logging Caching Caching a whole page Caching pages based on route details Caching pages based on browser type and version Caching pages based on custom strings Caching partial pages Caching application data Object Caching Using Microsoft Velocity Using MemCache Using AppFabric cache Localization HTTP Handlers and Modules Security XSS/CSRF AntiForgery Encoding HtmlHelpers Strongly typed helpers Writing custom helpers Repository Pattern (Data
access)   WF/WCF   Unit Testing   Mocking Framework   Integration Testing   Load / Performance Testing   Deployment    Once again let me know your thoughts on this.   Till then, Enjoy MVC'ing!!!

    Read the article

  • Give a session on C++ AMP – here is how

    - by Daniel Moth
    Ever since presenting on C++ AMP at the AMD Fusion conference in June, then the Gamefest conference in August, and the BUILD conference in September, I've had numerous requests about my material from folks that want to re-deliver the same session. The C++ AMP session I put together has evolved over the 3 presentations to its final form that I used at BUILD, so that is the one I recommend you base yours on. Please get the slides and the recording from channel9 (I'll refer to slide numbers below). This is how I've been presenting the C++ AMP session: Context (slide 3, 04:18-08:18) Start with a demo, on my dual-GPU machine. I've been using the N-Body sample (for VS 11 Developer Preview). (slide 4) Use an nvidia slide that has additional examples of performance improvements that customers enjoy with heterogeneous computing. (slide 5) Talk a bit about the differences today between CPU and GPU hardware, leading to the fact that these will continue to co-exist and that GPUs are great for data parallel algorithms, but not much else today. One is a jack of all trades and the other is a number cruncher. (slide 6) Use the APU example from amd, as one indication that the hardware space is still in motion, emphasizing that the C++ AMP solution is a data parallel API, not a GPU API. It has a future proof design for hardware we have yet to see. (slide 7) Provide more meta-data, as blogged about when I first introduced C++ AMP. Code (slide 9-11) Introduce C++ AMP coding with a simplistic array-addition algorithm – the slides speak for themselves. (slide 12-13) index<N>, extent<N>, and grid<N>. (Slide 14-16) array<T,N>, array_view<T,N> and comparison between them. (Slide 17) parallel_for_each. (slide 18, 21) restrict. (slide 19-20) actual restrictions of restrict(direct3d) – the slides speak for themselves. (slide 22) bring it altogether with a matrix multiplication example. (slide 23-24) accelerator, and accelerator_view. (slide 26-29) Introduce tiling incl. tiled matrix multiplication [tiling probably deserves a whole session instead of 6 minutes!]. IDE (slide 34,37) Briefly touch on the concurrency visualizer. It supports GPU profiling, but enhancements specific to C++ AMP we hope will come at the Beta timeframe, which is when I'll be spending more time talking about it. (slide 35-36, 51:54-59:16) Demonstrate the GPU debugging experience in VS 11. Summary (slide 39) Re-iterate some of the points of slide 7, and add the point that the C++ AMP spec will be open for other compiler vendors to implement, even on other platforms (in fact, Microsoft is actively working on that). (slide 40) Links to content – see slide – including where all your questions should go: http://social.msdn.microsoft.com/Forums/en/parallelcppnative/threads.   "But I don't have time for a full blown session, I only need 2 (or just 1, or 3) C++ AMP slides to use in my session on related topic X" If all you want is a small number of slides, you can take some from the session above and customize them. But because I am so nice, I have created some slides for you, including talking points in the notes section. Download them here. Comments about this post by Daniel Moth welcome at the original blog.
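
    For presenters who want a concrete snippet behind the "array-addition" bullet above, here is a minimal sketch. It is based on the released C++ AMP API, so it uses restrict(amp) rather than the Developer Preview's restrict(direct3d) referenced in the slides; adjust it to whichever build you are demoing:

        #include <amp.h>
        #include <iostream>
        #include <vector>
        using namespace concurrency;

        int main()
        {
            std::vector<float> a(8, 1.0f), b(8, 2.0f), c(8);

            // array_view wraps host data and handles copying to/from the accelerator.
            array_view<const float, 1> av_a(8, a);
            array_view<const float, 1> av_b(8, b);
            array_view<float, 1> av_c(8, c);
            av_c.discard_data();               // c is write-only, so skip the copy-in

            // Element-wise addition on the default accelerator.
            parallel_for_each(av_c.extent, [=](index<1> idx) restrict(amp)
            {
                av_c[idx] = av_a[idx] + av_b[idx];
            });

            av_c.synchronize();                // copy the results back to the host vector
            std::cout << c[0] << std::endl;    // 3
            return 0;
        }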

    Read the article

  • Ruby 1.9.3 segmentation fault on rackspace

    - by user531065
    I'm having a similar problem as "ruby 1.9 segmentation fault on exit." I followed the solution and updated libselinux with yum and have the following version installed: Package libselinux-2.0.96-6.fc14.1.x86_64 already installed and latest version I am running a RackSpace machine, kernel version: 2.6.35.4-rscloud. My C backtrace is: -- C level backtrace information ------------------------------------------- /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x18910a) [0x7f3cdb6c010a] vm_dump.c:796 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x5eb04) [0x7f3cdb595b04] error.c:258 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(rb_bug+0xb8) [0x7f3cdb5969d8] error.c:277 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x119865) [0x7f3cdb650865] signal.c:609 /lib64/libpthread.so.0() [0x328540eeb0] /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x179272) [0x7f3cdb6b0272] ./include/ruby/ruby.h:1306 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x17f31b) [0x7f3cdb6b631b] vm.c:1220 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x1800d5) [0x7f3cdb6b70d5] vm.c:624 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(rb_yield+0x44) [0x7f3cdb6bb274] vm.c:654 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0xab821) [0x7f3cdb5e2821] numeric.c:3204 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x183601) [0x7f3cdb6ba601] vm_insnhelper.c:404 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x1792b4) [0x7f3cdb6b02b4] insns.def:1015 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x17f31b) [0x7f3cdb6b631b] vm.c:1220 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x1800d5) [0x7f3cdb6b70d5] vm.c:624 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(rb_yield+0x44) [0x7f3cdb6bb274] vm.c:654 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(rb_ary_each+0x54) [0x7f3cdb5622e4] array.c:1478 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x183601) [0x7f3cdb6ba601] vm_insnhelper.c:404 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x1792b4) [0x7f3cdb6b02b4] insns.def:1015 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x17f31b) [0x7f3cdb6b631b] vm.c:1220 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(rb_iseq_eval+0x1f0) [0x7f3cdb6bbae0] vm.c:1447 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x67bfd) [0x7f3cdb59ebfd] load.c:310 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(rb_require_safe+0x6ef) [0x7f3cdb5a02df] load.c:619 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x183601) [0x7f3cdb6ba601] vm_insnhelper.c:404 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x1792b4) [0x7f3cdb6b02b4] insns.def:1015 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x17f31b) [0x7f3cdb6b631b] vm.c:1220 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x1800d5) [0x7f3cdb6b70d5] vm.c:624 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(rb_yield+0x44) [0x7f3cdb6bb274] vm.c:654 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(rb_ary_each+0x54) [0x7f3cdb5622e4] array.c:1478 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x183601) [0x7f3cdb6ba601] vm_insnhelper.c:404 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x1792b4) [0x7f3cdb6b02b4] insns.def:1015 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x17f31b) [0x7f3cdb6b631b] vm.c:1220 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(rb_iseq_eval_main+0xaf) [0x7f3cdb6bbbdf] vm.c:1461 
/usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(+0x6471a) [0x7f3cdb59b71a] eval.c:204 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(ruby_exec_node+0x1d) [0x7f3cdb59c56d] eval.c:251 /usr/local/rvm/rubies/ruby-1.9.3-p0/lib/libruby.so.1.9(ruby_run_node+0x1e) [0x7f3cdb59e60e] eval.c:244 ruby() [0x4008eb] /lib64/libc.so.6(__libc_start_main+0xfd) [0x3284c1ee5d] ruby() [0x4007d9]

    Read the article

  • WebCenter Customer Spotlight: Ancestry.com

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter  Solution Summary Ancestry.com Inc is the largest for-profit genealogy company in the world; it operates a network of genealogical and historical record websites focused on the U.S. and nine foreign countries, develops and markets genealogical software, and offers a wide array of genealogy-related services. As of June 2012, the company provided access to more than 10 billion records and 38 million family trees, and had 2 million paying subscribers. Their main business challenges were to improve time to market and the agility to respond quickly to fast-changing Internet waves while integrating with their existing content (4 petabytes) and legacy systems. Ancestry.com implemented Oracle WebCenter Sites as their Web Experience Management System for their landing pages and marketing micro sites, added dynamic sections to their existing websites and integrated the existing content and legacy systems through web services. The Ancestry.com landing pages and marketing sites are now managed by the business team without any involvement of engineering resources. Managed content can quickly be added to existing pages without having to refactor the whole page, and existing content (4 petabytes) is now served through Oracle WebCenter Sites without having to migrate from existing systems. Company Overview Ancestry.com Inc is a publicly traded Internet company (NASDAQ: ACOM) based in Provo, Utah, USA. The largest for-profit genealogy company in the world, it operates a network of genealogical and historical record websites focused on the U.S. and nine foreign countries, develops and markets genealogical software, and offers a wide array of genealogy-related services. As of June 2012, the company provided access to more than 10 billion records and 38 million family trees, and had 2 million paying subscribers. Business Challenges Ancestry's main business challenge was to respond quickly to fast-changing Internet waves. Product marketing could not change Web site content without going through development; they needed dedicated developers just to support their marketing efforts. Technical Requirements Support current systems and environments - ASP.NET, MVC.NET, Java, JSP, PHP Scalable and manageable for a worldwide network Marketing Requirements Easy to enter content – without needing a degree in HTML Scheduling of content – when is content visible to users Product Requirements Easy to manage content – see when content is out-of-date Rotation of content – producing new content as old content expires Solution Deployed Ancestry implemented Oracle WebCenter Sites as their Web Experience Management System to manage their landing pages and marketing micro sites. These sites are fully managed by their business team without involvement of any engineering resources. The integration with their existing Web sites is done through Spot Management, which makes it possible to add dynamic content to certain sections of a web page; the dynamic content is managed by Oracle WebCenter Sites. The integration with the existing content (4 petabytes!) is done through a custom content provider interface that allows existing content to be mixed with content from Oracle WebCenter Sites.
    Business Results Ancestry.com has achieved the following impressive business results: landing pages and marketing sites are now managed by the business team without any involvement of engineering resources; managed content can quickly be added to existing pages without having to refactor the whole page; and access to existing content (4 petabytes) is provided without having to migrate from existing systems. Additional Information Ancestry Webcast Oracle WebCenter Sites

    Read the article

  • FBX Importer - Texture Name

    - by CmasterG
    I have a problem with the FBX SDK. I read in the data for the vertex position and the uv coordinates. It works fine, but now I want to read for each polygon to which texture it belongs, so that I can have models with multiple textures. Can anyone tell me how I can get the texture name (file name) for my polygon. My code to read in vertex position and uv coordinates is the following: int i, j, lPolygonCount = pMesh->GetPolygonCount(); FbxVector4* lControlPoints = pMesh->GetControlPoints(); int vertexId = 0; for (i = 0; i < lPolygonCount; i++) { int lPolygonSize = pMesh->GetPolygonSize(i); for (j = 0; j < lPolygonSize; j++) { int lControlPointIndex = pMesh->GetPolygonVertex(i, j); FbxVector4 pos = lControlPoints[lControlPointIndex]; current_model[vertex_index].x = pos.mData[0] - pivot_offset[0]; current_model[vertex_index].y = pos.mData[1] - pivot_offset[1]; current_model[vertex_index].z = pos.mData[2]- pivot_offset[2]; FbxVector4 vertex_normal; pMesh->GetPolygonVertexNormal(i,j, vertex_normal); current_model[vertex_index].nx = vertex_normal.mData[0]; current_model[vertex_index].ny = vertex_normal.mData[1]; current_model[vertex_index].nz = vertex_normal.mData[2]; //read in UV data FbxStringList lUVSetNameList; pMesh->GetUVSetNames(lUVSetNameList); //get lUVSetIndex-th uv set const char* lUVSetName = lUVSetNameList.GetStringAt(0); const FbxGeometryElementUV* lUVElement = pMesh->GetElementUV(lUVSetName); if(!lUVElement) continue; // only support mapping mode eByPolygonVertex and eByControlPoint if( lUVElement->GetMappingMode() != FbxGeometryElement::eByPolygonVertex && lUVElement->GetMappingMode() != FbxGeometryElement::eByControlPoint ) return; //index array, where holds the index referenced to the uv data const bool lUseIndex = lUVElement->GetReferenceMode() != FbxGeometryElement::eDirect; const int lIndexCount= (lUseIndex) ? lUVElement->GetIndexArray().GetCount() : 0; FbxVector2 lUVValue; //get the index of the current vertex in control points array int lPolyVertIndex = pMesh->GetPolygonVertex(i,j); //the UV index depends on the reference mode //int lUVIndex = lUseIndex ? lUVElement->GetIndexArray().GetAt(lPolyVertIndex) : lPolyVertIndex; int lUVIndex = pMesh->GetTextureUVIndex(i, j); lUVValue = lUVElement->GetDirectArray().GetAt(lUVIndex); current_model[vertex_index].tu = (float)lUVValue.mData[0]; current_model[vertex_index].tv = (float)lUVValue.mData[1]; vertex_index ++; } } float v1[3], v2[3], v3[3]; v1[0] = current_model[vertex_index - 3].x; v1[1] = current_model[vertex_index - 3].y; v1[2] = current_model[vertex_index - 3].z; v2[0] = current_model[vertex_index - 2].x; v2[1] = current_model[vertex_index - 2].y; v2[2] = current_model[vertex_index - 2].z; v3[0] = current_model[vertex_index - 1].x; v3[1] = current_model[vertex_index - 1].y; v3[2] = current_model[vertex_index - 1].z; collision_model->addTriangle(v1,v2,v3);
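
    For reference, one common way to get at per-polygon texture file names with the FBX SDK is to go through the mesh's material element and the material's diffuse property. The following is only a hedged sketch, not a verified answer: it assumes an eByPolygon material mapping, a file texture connected to the diffuse channel, and a 2013-or-later SDK for the templated GetSrcObject calls:

        // Sketch: resolve the diffuse texture file name for polygon i of pMesh.
        FbxGeometryElementMaterial* lMaterialElement = pMesh->GetElementMaterial();
        if (lMaterialElement)
        {
            // Index of the material assigned to polygon i (eByPolygon mapping assumed).
            int lMaterialIndex = lMaterialElement->GetIndexArray().GetAt(i);
            FbxSurfaceMaterial* lMaterial = pMesh->GetNode()->GetMaterial(lMaterialIndex);
            if (lMaterial)
            {
                FbxProperty lDiffuse = lMaterial->FindProperty(FbxSurfaceMaterial::sDiffuse);
                if (lDiffuse.IsValid() && lDiffuse.GetSrcObjectCount<FbxFileTexture>() > 0)
                {
                    FbxFileTexture* lTexture = lDiffuse.GetSrcObject<FbxFileTexture>(0);
                    const char* lFileName = lTexture->GetFileName(); // path of the texture file
                }
            }
        }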

    Read the article

  • Texture displays on Android emulator but not on device

    - by Rob
    I have written a simple UI which takes an image (256x256) and maps it to a rectangle. This works perfectly on the emulator however on the phone the texture does not show, I see only a white rectangle. This is my code: public void onSurfaceCreated(GL10 gl, EGLConfig config) { byteBuffer = ByteBuffer.allocateDirect(shape.length * 4); byteBuffer.order(ByteOrder.nativeOrder()); vertexBuffer = byteBuffer.asFloatBuffer(); vertexBuffer.put(cardshape); vertexBuffer.position(0); byteBuffer = ByteBuffer.allocateDirect(shape.length * 4); byteBuffer.order(ByteOrder.nativeOrder()); textureBuffer = byteBuffer.asFloatBuffer(); textureBuffer.put(textureshape); textureBuffer.position(0); // Set the background color to black ( rgba ). gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f); // Enable Smooth Shading, default not really needed. gl.glShadeModel(GL10.GL_SMOOTH); // Depth buffer setup. gl.glClearDepthf(1.0f); // Enables depth testing. gl.glEnable(GL10.GL_DEPTH_TEST); // The type of depth testing to do. gl.glDepthFunc(GL10.GL_LEQUAL); // Really nice perspective calculations. gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST); gl.glEnable(GL10.GL_TEXTURE_2D); loadGLTexture(gl); } public void onDrawFrame(GL10 gl) { gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); gl.glDisable(GL10.GL_DEPTH_TEST); gl.glMatrixMode(GL10.GL_PROJECTION); // Select Projection gl.glPushMatrix(); // Push The Matrix gl.glLoadIdentity(); // Reset The Matrix gl.glOrthof(0f, 480f, 0f, 800f, -1f, 1f); gl.glMatrixMode(GL10.GL_MODELVIEW); // Select Modelview Matrix gl.glPushMatrix(); // Push The Matrix gl.glLoadIdentity(); // Reset The Matrix gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glLoadIdentity(); gl.glTranslatef(card.x, card.y, 0.0f); gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]); //activates texture to be used now gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer); gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer); gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY); } public void onSurfaceChanged(GL10 gl, int width, int height) { // Sets the current view port to the new size. 
gl.glViewport(0, 0, width, height); // Select the projection matrix gl.glMatrixMode(GL10.GL_PROJECTION); // Reset the projection matrix gl.glLoadIdentity(); // Calculate the aspect ratio of the window GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100.0f); // Select the modelview matrix gl.glMatrixMode(GL10.GL_MODELVIEW); // Reset the modelview matrix gl.glLoadIdentity(); } public int[] texture = new int[1]; public void loadGLTexture(GL10 gl) { // loading texture Bitmap bitmap; bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.image); // generate one texture pointer gl.glGenTextures(0, texture, 0); //adds texture id to texture array // ...and bind it to our array gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]); //activates texture to be used now // create nearest filtered texture gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); // Use Android GLUtils to specify a two-dimensional texture image from our bitmap GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); // Clean up bitmap.recycle(); } As per many other similar issues and resolutions on the web i have tried setting the minsdkversion is 3, loading the bitmap via an input stream bitmap = BitmapFactory.decodeStream(is), setting BitmapFactory.Options.inScaled to false, putting the images in the nodpi folder and putting them in the raw folder.. all of which didn't help. I'm not really sure what else to try..
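    Two things worth checking, offered as guesses rather than a confirmed diagnosis: gl.glGenTextures(0, texture, 0) asks OpenGL to generate zero texture names (the first argument is the count, so it should be 1), and many real GLES 1.x devices – unlike the emulator – will not sample non-power-of-two textures unless the wrap modes are clamped. The texture setup is sketched below against the C GLES 1.x headers for brevity; the same entry points exist on Android's GL10 interface.

      #include <GLES/gl.h>

      // Create one texture object with GLES 1.x-safe parameters.
      // Pixel upload (GLUtils.texImage2D on Android) happens after this returns.
      GLuint CreateTexture(void)
      {
          GLuint tex = 0;
          glGenTextures(1, &tex);   // generate ONE name; a count of 0 generates nothing
          glBindTexture(GL_TEXTURE_2D, tex);
          glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
          glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
          // Clamp instead of repeat: repeat wrapping on non-power-of-two textures is a
          // common reason a texture renders as plain white on device-level GLES 1.x drivers.
          glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
          glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
          return tex;
      }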

    Read the article

  • Issues with glVertexAttribPointer's last 2 parameters?

    - by NoobScratcher
    Introduction Hello I will start out by explaining my setup, showing samples as I go along explaining the situation. I'm using these tools: OpenGL 3.3 GLSL 330 C++ Problem The problem is when I render the wavefront obj 3d model it gives a very weird visual glitch the model was supposed to be a square but instead its a triangluated mess with parts of the vertexes pointing in a stretched direction in massive amounts towards the bottom left side of the frustum.... Explanation: I'm using std::vectors to store my wavefront .obj model data using sscanf to get the floating point values into the structure members x,y,z and store them into the Points structure variable p; int index = IndexAssigner(1, 1); ifstream file (list[index].c_str() ); points.push_back(Point()); Point p; int face[4]; while (!file.eof() ) { char modelbuffer[10000]; file.getline(modelbuffer, 10000); switch(modelbuffer[0]) { case 'v' : sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z); points.push_back(p); break; case 'f': sscanf(modelbuffer, "f %d %d %d %d", face, face+1, face+2, face+3 ); faces.push_back(face[0]); faces.push_back(face[1]); faces.push_back(face[2]); faces.push_back(face[3]); } //Turn on FileReader aka "RENDER CODE" FileReader = true; } then I render the Points vector using the .data() member of std::vectors to the frustum. Other declarations: int numfloats = 4; float* point=reinterpret_cast<float*>(&points[0]); int num_bytes=numfloats*sizeof(float); Vector declarations: struct Point {float x, y , z; }; std::vector<int>faces; std::vector<Point>points; Render code: glGenBuffers(1, &vertexbuffer); glGenTextures(1, &ModelTexture); glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer); glBindTexture(GL_TEXTURE_3D, ModelTexture); glTexImage2D(GL_TEXTURE_2D, 0,GL_RGBA, ModelSurface->w, ModelSurface->h, 0, GL_BGR, GL_UNSIGNED_BYTE, ModelSurface->pixels); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glBufferData(GL_ARRAY_BUFFER, sizeof(points), points.data(), GL_STATIC_DRAW); glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE,num_bytes ,points.data()); glEnableVertexAttribArray(3); //Translation Process GLfloat TranslationMatrix[] = { 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0 }; //Send Translation Matrix up to the vertex shader glUniformMatrix4fv(translation, 1, TRUE, TranslationMatrix); glDrawElements( GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data()); I tried looking at what was causing this and went through every function every parameter ,etc looked at the man pages. Then found out that it could be my glVertexAttribPointer. Here are the man pages for glVertexAttribPointer http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttribPointer.xml The last 2 parameters is my problem How do I write those 2 last parameters do I try putting the data from Points into it?. glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE,num_bytes ,points.data()); How does it work with vectors? Is it fast?* if you can not be bothered too look at the man pages here is the scripts coming from the man pages directly. Stride Specifies the byte offset between consecutive generic vertex attributes. If stride is 0, the generic vertex attributes are understood to be tightly packed in the array. The initial value is 0. Pointer Specifies a pointer to the first component of the first generic vertex attribute in the array. The initial value is 0. If you want my full source - http://ideone.com/fPfkg Thanks Again if you do read this.
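    On the two parameters the question asks about: once a buffer is bound to GL_ARRAY_BUFFER, the stride is the number of bytes from one vertex to the next, and the final argument is a byte offset into that buffer – not a CPU pointer. Passing points.data() there (and sizeof(points), the size of the vector object itself, to glBufferData) is a plausible source of the stretched geometry. A hedged sketch of the upload/describe step, with illustrative names:

      #include <GL/glew.h>   // GLEW used here just for the GL 3.3 prototypes; any loader works
      #include <vector>

      struct Point { float x, y, z; };

      // Upload a std::vector<Point> into a VBO and describe it to vertex attribute 3.
      GLuint UploadPoints(const std::vector<Point>& points)
      {
          GLuint vbo = 0;
          glGenBuffers(1, &vbo);
          glBindBuffer(GL_ARRAY_BUFFER, vbo);
          glBufferData(GL_ARRAY_BUFFER,
                       points.size() * sizeof(Point),  // byte size of the data, not sizeof(points)
                       points.data(),
                       GL_STATIC_DRAW);

          glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE,
                                sizeof(Point),          // stride: bytes between consecutive vertices (0 also works when tightly packed)
                                (const void*)0);        // offset of the first component inside the bound VBO
          glEnableVertexAttribArray(3);
          return vbo;
      }

    Separately, wavefront .obj face indices are 1-based, so they usually need a -1 before being handed to glDrawElements, and GL_QUADS is not available in a 3.3 core profile – both worth checking once the attribute pointer is sorted out.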

    Read the article

  • lvm disappeared after disc replacement on raid10

    - by user142295
    here my problem: I am running ubuntu 12.04 on a raid10 (4 disks), on top of which I installed an lvm with two volume groups (one for /, one for /home). The layout of the disks are as follows: Disk /dev/sda: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0003f3b6 Device Boot Start End Blocks Id System /dev/sda1 * 63 481949 240943+ 83 Linux /dev/sda2 481950 2910640634 1455079342+ fd Linux raid autodetect /dev/sda3 2910640635 2930272064 9815715 82 Linux swap / Solaris Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00069785 Device Boot Start End Blocks Id System /dev/sdb1 63 2910158684 1455079311 fd Linux raid autodetect /dev/sdb2 2910158685 2930272064 10056690 82 Linux swap / Solaris Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdc1 63 2910158684 1455079311 fd Linux raid autodetect /dev/sdc2 2910158685 2930272064 10056690 82 Linux swap / Solaris Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000f14de Device Boot Start End Blocks Id System /dev/sdd1 63 2910158684 1455079311 fd Linux raid autodetect /dev/sdd2 2910158685 2930272064 10056690 82 Linux swap / Solaris The first disk (/dev/sda) contains the /boot partition on /dev/sda1. I use grub2 to boot the system off this partition. On top of this raid10 I installed two volume groups, one for /, one for /home. This system worked well, I even exchanged two disks during the last two years. It always worked. But not this time. For the first time, /dev/sda broke. I do not know if this is an issue – I know I would have struggled anyways to overcome the problem with /boot installed on that disk and grub2 installed on the mbr of /dev/sda. 
Anyways, I did what I always did: start knoppix fire up the raid sudo mdadm --examine -scan which returns ARRAY /dev/md127 UUID=0dbf4558:1a943464:132783e8:19cdff95 start it up sudo mdadm --assemble /dev/md127 fail the failing disk (smart event) sudo mdadm /dev/md127 --fail /dev/sda2 remove the failing disk sudo mdadm /dev/md127 --remove /dev/sda2 stop the raid sudo mdadm -S /dev/md127 take out the disk replace it with a new one create the same partitions as on the failling one add it to the raid sudo mdadm --assemble /dev/md127 sudo mdadm /dev/md127 --add /dev/sda2 wait 4 hours All looks fine: cat /proc/mdstat returns: Personalities : [raid10] md127 : active raid10 sda2[0] sdd1[3] sdc1[2] sdb1[1] 2910158464 blocks 64K chunks 2 near-copies [4/4] [UUUU] unused devices: <none> and sudo mdadm --detail /dev/md127 returns /dev/md127: Version : 0.90 Creation Time : Wed Jun 10 13:08:46 2009 Raid Level : raid10 Array Size : 2910158464 (2775.34 GiB 2980.00 GB) Used Dev Size : 1455079232 (1387.67 GiB 1490.00 GB) Raid Devices : 4 Total Devices : 4 Preferred Minor : 127 Persistence : Superblock is persistent Update Time : Thu Mar 21 16:27:40 2013 State : clean Active Devices : 4 Working Devices : 4 Failed Devices : 0 Spare Devices : 0 Layout : near=2 Chunk Size : 64K UUID : 0dbf4558:1a943464:132783e8:19cdff95 (local to host Microknoppix) Events : 0.4824680 Number Major Minor RaidDevice State 0 8 2 0 active sync /dev/sda2 1 8 17 1 active sync /dev/sdb1 2 8 33 2 active sync /dev/sdc1 3 8 49 3 active sync /dev/sdd1 However, there is no trace of the volume groups. Rebooting into knoppix does not help Restarting the old system (I actually replugged and re-added the failing disk for that – the system begins to start, but then fails to see the / partition – no wonder if the volume group is gone) does not help. sudo vgscan, sudo vgdisplay, sudo lvs, sudo lvdisplay, sudo vgscan –mknodes all returned No volume groups found. I am completely at a loss. Can anyone tell me if and how I can recover my data? Thanks in advance!

    Read the article

  • If the model is validating the data, shouldn't it throw exceptions on bad input?

    - by Carlos Campderrós
    Reading this SO question it seems that throwing exceptions for validating user input is frowned upon. But who should validate this data? In my applications, all validations are done in the business layer, because only the class itself really knows which values are valid for each one of its properties. If I were to copy the rules for validating a property to the controller, it is possible that the validation rules change and now there are two places where the modification should be made. Is my premise that validation should be done on the business layer wrong? What I do So my code usually ends up like this: <?php class Person { private $name; private $age; public function setName($n) { $n = trim($n); if (mb_strlen($n) == 0) { throw new ValidationException("Name cannot be empty"); } $this->name = $n; } public function setAge($a) { if (!is_int($a)) { if (!ctype_digit(trim($a))) { throw new ValidationException("Age $a is not valid"); } $a = (int)$a; } if ($a < 0 || $a > 150) { throw new ValidationException("Age $a is out of bounds"); } $this->age = $a; } // other getters, setters and methods } In the controller, I just pass the input data to the model, and catch thrown exceptions to show the error(s) to the user: <?php $person = new Person(); $errors = array(); // global try for all exceptions other than ValidationException try { // validation and process (if everything ok) try { $person->setAge($_POST['age']); } catch (ValidationException $e) { $errors['age'] = $e->getMessage(); } try { $person->setName($_POST['name']); } catch (ValidationException $e) { $errors['name'] = $e->getMessage(); } ... } catch (Exception $e) { // log the error, send 500 internal server error to the client // and finish the request } if (count($errors) == 0) { // process } else { showErrorsToUser($errors); } Is this a bad methodology? Alternate method Should maybe I create methods for isValidAge($a) that return true/false and then call them from the controller? <?php class Person { private $name; private $age; public function setName($n) { $n = trim($n); if ($this->isValidName($n)) { $this->name = $n; } else { throw new Exception("Invalid name"); } } public function setAge($a) { if ($this->isValidAge($a)) { $this->age = $a; } else { throw new Exception("Invalid age"); } } public function isValidName($n) { $n = trim($n); if (mb_strlen($n) == 0) { return false; } return true; } public function isValidAge($a) { if (!is_int($a)) { if (!ctype_digit(trim($a))) { return false; } $a = (int)$a; } if ($a < 0 || $a > 150) { return false; } return true; } // other getters, setters and methods } And the controller will be basically the same, just instead of try/catch there are now if/else: <?php $person = new Person(); $errors = array(); if ($person->isValidAge($age)) { $person->setAge($age); } catch (Exception $e) { $errors['age'] = "Invalid age"; } if ($person->isValidName($name)) { $person->setName($name); } catch (Exception $e) { $errors['name'] = "Invalid name"; } ... if (count($errors) == 0) { // process } else { showErrorsToUser($errors); } So, what should I do? I'm pretty happy with my original method, and my colleagues to whom I have showed it in general have liked it. Despite this, should I change to the alternate method? Or am I doing this terribly wrong and I should look for another way?

    Read the article

  • Pro/con of using Angular directives for complex form validation/ GUI manipulation

    - by tengen
    I am building a new SPA front end to replace an existing enterprise's legacy hodgepodge of systems that are outdated and in need of updating. I am new to angular, and wanted to see if the community could give me some perspective. I'll state my problem, and then ask my question. I have to generate several series of check boxes based on data from a .js include, with data like this: $scope.fieldMappings.investmentObjectiveMap = [ {'id':"CAPITAL PRESERVATION", 'name':"Capital Preservation"}, {'id':"STABLE", 'name':"Moderate"}, {'id':"BALANCED", 'name':"Moderate Growth"}, // etc {'id':"NONE", 'name':"None"} ]; The checkboxes are created using an ng-repeat, like this: <div ng-repeat="investmentObjective in fieldMappings.investmentObjectiveMap"> ... </div> However, I needed the values represented by the checkboxes to map to a different model (not just 2-way-bound to the fieldmappings object). To accomplish this, I created a directive, which accepts a destination array destarray which is eventually mapped to the model. I also know I need to handle some very specific gui controls, such as unchecking "None" if anything else gets checked, or checking "None" if everything else gets unchecked. Also, "None" won't be an option in every group of checkboxes, so the directive needs to be generic enough to accept a validation function that can fiddle with the checked state of the checkbox group's inputs based on what's already clicked, but smart enough not to break if there is no option called "NONE". I started to do that by adding an ng-click which invoked a function in the controller, but in looking around Stack Overflow, I read people saying that its bad to put DOM manipulation code inside your controller - it should go in directives. So do I need another directive? So far: (html): <input my-checkbox-group type="checkbox" fieldobj="investmentObjective" ng-click="validationfunc()" validationfunc="clearOnNone()" destarray="investor.investmentObjective" /> Directive code: .directive("myCheckboxGroup", function () { return { restrict: "A", scope: { destarray: "=", // the source of all the checkbox values fieldobj: "=", // the array the values came from validationfunc: "&" // the function to be called for validation (optional) }, link: function (scope, elem, attrs) { if (scope.destarray.indexOf(scope.fieldobj.id) !== -1) { elem[0].checked = true; } elem.bind('click', function () { var index = scope.destarray.indexOf(scope.fieldobj.id); if (elem[0].checked) { if (index === -1) { scope.destarray.push(scope.fieldobj.id); } } else { if (index !== -1) { scope.destarray.splice(index, 1); } } }); } }; }) .js controller snippet: .controller( 'SuitabilityCtrl', ['$scope', function ( $scope ) { $scope.clearOnNone = function() { // naughty jQuery DOM manipulation code that // looks at checkboxes and checks/unchecks as needed }; The above code is done and works fine, except the naughty jquery code in clearOnNone(), which is why I wrote this question. And here is my question: after ALL this, I think to myself - I could be done already if I just manually handled all this GUI logic and validation junk with jQuery written in my controller. At what point does it become foolish to write these complicated directives that future developers will have to puzzle over more than if I had just written jQuery code that 99% of us would understand with a glance? How do other developers draw the line? I see this all over Stack Overflow. 
For example, this question seems like it could be answered with a dozen lines of straightforward jQuery, yet he has opted to do it the angular way, with a directive and a partial... it seems like a lot of work for a simple problem. Specifically, I suppose I would like to know: how SHOULD I be writing the code that checks whether "None" has been selected (if it exists as an option in this group of checkboxes), and then check/uncheck the other boxes accordingly? A more complex directive? I can't believe I'm the only developer that is having to implement code that is more complex than needed just to satisfy an opinionated framework.

    Read the article

  • How to place rooms procedurally (rule based) in a game world

    - by gardian06
    I am trying to design the algorithm for my level generation which is a rule driven system. I have created all the rules for the system. I have taken care to insure that all rooms make sense in a grid type setup. for example: these rooms could make this configuration The logic flow code that I have so far Door{ Vector3 position; POD orient; // 5 possible values (up is not an option) bool Open; } Room{ String roomRule; Vector3 roomPos; Vector3 dimensions; POD roomOrient; // 4 possible values List doors<Door>; } LevelManager{ float scale = 18f; List usedRooms<Room>; List openDoors<Door> bool Grid[][][]; Room CreateRoom(String rule, Vector3 position, POD Orient){ place recieved values based on rule fill in other data } Vector3 getDimenstions(String rule){ return dimensions of the room } RotateRoom(POD rotateAmount){ rotate all items in the room } MoveRoom(Room toBeMoved, POD orientataion, float distance){ move the position of the room based on inputs } GenerateMap(Vector3 size, Vector3 start, Vector3 end){ Grid = array[size.y][size.x][size.z]; Room floatingRoom; floatingRoom = Room.CreateRoom(S01, start, rand(4)); usedRooms.Add(floatingRoom); for each Door in floatingRoom.doors{ openDoors.Add(door); } // fill used grid spaces floatingRoom = Room.CreateRoom(S02, end, rand(4); usedRooms.Add(floatingRoom); for each Door in floatingRoom.doors{ openDoors.Add(door); } Vector3 nRoomLocation; Door workingDoor; string workingRoom; // fill used grid spaces // pick random door on the openDoors list workingDoor = /*randomDoor*/ // get a random rule nRoomLocation = workingDoor.position; // then I'm lost } } I know that I have to make sure for convergence (namely the end is reachable), and to do this until there are no more doors on the openDoors list. right now I am simply trying to get this to work in 2D (there are rules that introduce 3D), but I am working on a presumption that a rigorous algorithm can be trivially extended to 3D. EDIT: my thought pattern so far is to take an existing open door and then pick a random room (restrictions can be put in later) place that room's center at the doors location move the room in the direction of the doors orientation half the rooms dimension w/respect to that axis then test against the 3D array to see if all the grid points are open, or have been used, or if there is even space to put the room (caseEdge) if caseEdge (which can also occur in between rooms) then put the door on a toBeClosed list, and remove it from the open list (placing a wall or something there). then to do some kind of test that both the start, and the goal are connected, and reachable from each other (each room has nodes for AI, but I don't want to "have" to pull those out to accomplish this). but this logic has the problem for say the U, or L shaped rooms in my example, and then I also have a problem conceptually if the room needs to be rotated.
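    For the "then I'm lost" step, the usual next move is a plain occupancy test: compute the grid cells the candidate room would cover once it is aligned with the chosen open door, and reject the placement if any cell is out of bounds or already used. A rough 2D sketch with illustrative names (not code from the question; the third dimension is just one more loop):

      #include <vector>

      struct Footprint { int x, y, w, h; };   // cells the candidate room covers after rotation/translation

      // True if every cell the room would occupy is inside the grid and still free.
      bool CanPlace(const std::vector<std::vector<bool>>& used, const Footprint& r)
      {
          if (r.x < 0 || r.y < 0) return false;
          if (r.y + r.h > (int)used.size()) return false;
          if (r.x + r.w > (int)used[0].size()) return false;
          for (int row = r.y; row < r.y + r.h; ++row)
              for (int col = r.x; col < r.x + r.w; ++col)
                  if (used[row][col]) return false;   // overlaps an existing room
          return true;
      }

      // Once a placement is accepted, mark its cells so later rooms cannot reuse them.
      void MarkUsed(std::vector<std::vector<bool>>& used, const Footprint& r)
      {
          for (int row = r.y; row < r.y + r.h; ++row)
              for (int col = r.x; col < r.x + r.w; ++col)
                  used[row][col] = true;
      }

    L- and U-shaped rooms fit the same test if each room rule carries a per-cell mask instead of a solid rectangle; rotating the room then becomes rotating the mask before testing.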

    Read the article

  • Mysterious dbboon folder with proxy.php file on my godaddy account

    - by Paul
    When doing some web maintenance today, I noticed a strange new folder on my GoDaddy hosting account at the root level named "dbboon", with a single file inside, called proxy.php. It's code is listed below, and seems to be some sort of proxy function. I was kind of troubled because I didn't put it there. I googled all this to learn more, but didn't find anything, except for the proxy file happened to be also stored at pastebin.com: http://pastebin.com/PQsSPbCr I called GoDaddy and they confirmed that it belonged to them, said it was put there by their advanced hosting group for testing purposes but didn't have any more information. I thought this was all really weird: why would they put something in my folder without giving me a heads-up, and why would they need to do something like this? anybody know anything about this? <?php $version = '1.2'; if(isset($_GET['dbboon_version'])) { echo '{"version":"' . $version . '"}'; exit; } function dbboon_parseHeaders($subject) { global $version; $subject = trim($subject); $parsed = Array(); $len = strlen($subject); $position = $field = 0; $position = strpos($subject, "\r\n") + 2; while(isset($subject[$position])) { $nextC = strpos($subject, ':', $position); $fieldName = substr($subject, $position, ($nextC-$position)); $position += strlen($fieldName) + 1; $fieldValue = NULL; while(1) { $nextCrlf = strpos($subject, "\r\n", $position - 1); if(FALSE === $nextCrlf) { $t = substr($subject, $position); $position = $len; } else { $t = substr($subject, $position, $nextCrlf-$position); $position += strlen($t) + 2; } $fieldValue .= $t; if(!isset($subject[$position]) || (' ' != $subject[$position] && "\t" != $subject[$position])) { break; } } $parsed[strtolower($fieldName)] = trim($fieldValue); if($position > $len) { echo '{"result":false,"error":{"code":4,"message":"Communication error, unable to contact proxy service.","version":"' . $version . '"}}'; exit; } } return $parsed; } if(!function_exists('http_build_query')) { function http_build_query($data, $prefix = '', $sep = '', $key = '') { $ret = Array(); foreach((array) $data as $k => $v) { if(is_int($k) && NULL != $prefix) { $k = urlencode($prefix . $k); } if(!empty($key) || $key === 0) { $k = $key . '[' . urlencode($k) . ']'; } if(is_array($v) || is_object($v)) { array_push($ret, http_build_query($v, '', $sep, $k)); } else { array_push($ret, $k . '=' . urlencode($v)); } } if(empty($sep)) { $sep = '&'; } return implode($sep, $ret); } } $host = 'dbexternalsubscriber.secureserver.net'; $get = http_build_query($_GET); $post = http_build_query($_POST); $url = $get ? "?$get" : ''; $fp = fsockopen($host, 80, $errno, $errstr); if($fp) { $payload = "POST /embed/$url HTTP/1.1\r\n"; $payload .= "Host: $host\r\n"; $payload .= "Content-Length: " . strlen($post) . 
"\r\n"; $payload .= "Content-Type: application/x-www-form-urlencoded\r\n"; $payload .= "Connection: Close\r\n\r\n"; $payload .= $post; fwrite($fp, $payload); $httpCode = NULL; $response = NULL; $timeout = time() + 15; do { while($line = fgets($fp)) { $response .= $line; if(!trim($line)) { break; } } } while($timeout > time() && NULL === $response); $headers = dbboon_parseHeaders($response); if(isset($headers['transfer-encoding']) && 'chunked' === $headers['transfer-encoding']) { do { $cSize = $read = hexdec(trim(fgets($fp))); while($read > 0) { $buff = fread($fp, $read); $read -= strlen($buff); $response .= $buff; } $response .= fgets($fp); } while($cSize > 0); } else { preg_match('/Content-Length:\s([0-9]+)\r\n/msi', $response, $match); if(!isset($match[1])) { echo '{"result":false,"error":{"code":3,"message":"Communication error, unable to contact proxy service.","version":"' . $version . '"}}'; exit; } else { while($match[1] > 0) { $buff = fread($fp, $match[1]); $match[1] -= strlen($buff); $response .= $buff; } } } fclose($fp); if(!$pos = strpos($response, "\r\n\r\n")) { echo '{"result":false,"error":{"code":2,"message":"Communication error, unable to contact proxy service.","version":"' . $version . '"}}'; exit; } echo substr($response, $pos + 4); } else { echo '{"result":false,"error":{"code":1,"message":"Communication error, unable to contact proxy service.","version":"' . $version . '"}}'; exit; }

    Read the article

  • [OpenGL ES - Android] Better way to generate tiles

    - by Inoe
    Hi ! I'll start by saying that i'm REALLY new to OpenGL ES (I started yesterday =), but I do have some Java and other languages experience. I've looked a lot of tutorials, of course Nehe's ones and my work is mainly based on that. As a test, I started creating a "tile generator" in order to create a small Zelda-like game (just moving a dude in a textured square would be awsome :p). So far, I have achieved a working tile generator, I define a char map[][] array to store wich tile is on : private char[][] map = { {0, 0, 20, 11, 11, 11, 11, 4, 0, 0}, {0, 20, 16, 12, 12, 12, 12, 7, 4, 0}, {20, 16, 17, 13, 13, 13, 13, 9, 7, 4}, {21, 24, 18, 14, 14, 14, 14, 8, 5, 1}, {21, 22, 25, 15, 15, 15, 15, 6, 2, 1}, {21, 22, 23, 0, 0, 0, 0, 3, 2, 1}, {21, 22, 23, 0, 0, 0, 0, 3, 2, 1}, {26, 0, 0, 0, 0, 0, 0, 3, 2, 1}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1} }; It's working but I'm no happy with it, I'm sure there is a beter way to do those things : 1) Loading Textures : I create an ugly looking array containing the tiles I want to use on that map : private int[] textures = { R.drawable.herbe, //0 R.drawable.murdroite_haut, //1 R.drawable.murdroite_milieu, //2 R.drawable.murdroite_bas, //3 R.drawable.angledroitehaut_haut, //4 R.drawable.angledroitehaut_milieu, //5 }; (I cutted this on purpose, I currently load 27 tiles) All of theses are stored in the drawable folder, each one is a 16*16 tile. I then use this array to generate the textures and store them in a HashMap for a later use : int[] tmp_tex = new int[textures.length]; gl.glGenTextures(textures.length, tmp_tex, 0); texturesgen = tmp_tex; //Store the generated names in texturesgen for(int i=0; i < textures.length; i++) { //Bitmap bmp = BitmapFactory.decodeResource(context.getResources(), textures[i]); InputStream is = context.getResources().openRawResource(textures[i]); Bitmap bitmap = null; try { //BitmapFactory is an Android graphics utility for images bitmap = BitmapFactory.decodeStream(is); } finally { //Always clear and close try { is.close(); is = null; } catch (IOException e) { } } // Get a new texture name // Load it up this.textureMap.put(new Integer(textures[i]),new Integer(i)); int tex = tmp_tex[i]; gl.glBindTexture(GL10.GL_TEXTURE_2D, tex); //Create Nearest Filtered Texture gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); //Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT); //Use the Android GLUtils to specify a two-dimensional texture image from our bitmap GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); bitmap.recycle(); } I'm quite sure there is a better way to handle that... I just was unable to figure it. If someone has an idea, i'm all ears. 
2) Drawing the tiles What I did was create a single square and a single texture map : /** The initial vertex definition */ private float vertices[] = { -1.0f, -1.0f, 0.0f, //Bottom Left 1.0f, -1.0f, 0.0f, //Bottom Right -1.0f, 1.0f, 0.0f, //Top Left 1.0f, 1.0f, 0.0f //Top Right }; private float texture[] = { //Mapping coordinates for the vertices 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f }; Then, in my draw function, I loop through the map to define the texture to use (after pointing to and enabling the buffers) : for(int y = 0; y < Y; y++){ for(int x = 0; x < X; x++){ tile = map[y][x]; try { //Get the texture from the HashMap int textureid = ((Integer) this.textureMap.get(new Integer(textures[tile]))).intValue(); gl.glBindTexture(GL10.GL_TEXTURE_2D, this.texturesgen[textureid]); } catch(Exception e) { return; } //Draw the vertices as triangle strip gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3); gl.glTranslatef(2.0f, 0.0f, 0.0f); //A square takes 2x so I move +2x before drawing the next tile } gl.glTranslatef(-(float)(2*X), -2.0f, 0.0f); //Go back to the begining of the map X-wise and move 2y down before drawing the next line } This works great by I really think that on a 1000*1000 or more map, it will be lagging as hell (as a reminder, this is a typical Zelda world map : http://vgmaps.com/Atlas/SuperNES/LegendOfZelda-ALinkToThePast-LightWorld.png ). I've read things about Vertex Buffer Object and DisplayList but I couldn't find a good tutorial and nodoby seems to be OK on wich one is the best / has the better support (T1 and Nexus One are ages away). I think that's it, I've putted a lot of code but I think it helps. Thanks in advance !
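    On the performance question: the usual answer for big tile maps is to stop issuing one glBindTexture and one glDrawArrays per tile. Pack the 16x16 tiles into a single atlas texture, build one position/UV array for all (visible) tiles, and draw a whole screen's worth of map in a single call. The batching step is sketched below in C++ with illustrative names (not code from the question); on Android the resulting arrays would go into FloatBuffers exactly like the per-quad buffers above.

      #include <vector>
      #include <cstddef>

      // Build one big position/UV array for a tile map whose tiles live in a square
      // atlas of tilesPerRow x tilesPerRow tiles. Two triangles (6 vertices) per tile.
      struct TileBatch { std::vector<float> xy; std::vector<float> uv; };

      TileBatch BuildBatch(const std::vector<std::vector<int>>& map, int tilesPerRow, float tileSize)
      {
          TileBatch b;
          const float step = 1.0f / tilesPerRow;            // UV size of one tile in the atlas
          for (std::size_t y = 0; y < map.size(); ++y)
          {
              for (std::size_t x = 0; x < map[y].size(); ++x)
              {
                  int tile = map[y][x];
                  float u0 = (tile % tilesPerRow) * step, v0 = (tile / tilesPerRow) * step;
                  float u1 = u0 + step, v1 = v0 + step;
                  float x0 = (float)x * tileSize, y0 = (float)y * tileSize;
                  float x1 = x0 + tileSize, y1 = y0 + tileSize;
                  const float quadXY[12] = { x0,y0, x1,y0, x1,y1,  x0,y0, x1,y1, x0,y1 };
                  const float quadUV[12] = { u0,v1, u1,v1, u1,v0,  u0,v1, u1,v0, u0,v0 };  // V may need flipping depending on how the atlas is loaded
                  b.xy.insert(b.xy.end(), quadXY, quadXY + 12);
                  b.uv.insert(b.uv.end(), quadUV, quadUV + 12);
              }
          }
          return b;
      }

    Draw the batch with a single glDrawArrays(GL_TRIANGLES, 0, xy.size() / 2) after pointing glVertexPointer and glTexCoordPointer at the two arrays; for maps far larger than the screen, rebuild the batch for just the visible window rather than the full 1000x1000 grid.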

    Read the article

  • jQuery and Windows Azure

    - by Stephen Walther
    The goal of this blog entry is to describe how you can host a simple Ajax application created with jQuery in the Windows Azure cloud. In this blog entry, I make no assumptions. I assume that you have never used Windows Azure and I am going to walk through the steps required to host the application in the cloud in agonizing detail. Our application will consist of a single HTML page and a single service. The HTML page will contain jQuery code that invokes the service to retrieve and display set of records. There are five steps that you must complete to host the jQuery application: Sign up for Windows Azure Create a Hosted Service Install the Windows Azure Tools for Visual Studio Create a Windows Azure Cloud Service Deploy the Cloud Service Sign Up for Windows Azure Go to http://www.microsoft.com/windowsazure/ and click the Sign up Now button. Select one of the offers. I selected the Introductory Special offer because it is free and I just wanted to experiment with Windows Azure for the purposes of this blog entry.     To sign up, you will need a Windows Live ID and you will need to enter a credit card number. After you finish the sign up process, you will receive an email that explains how to activate your account. Accessing the Developer Portal After you create your account and your account is activated, you can access the Windows Azure developer portal by visiting the following URL: http://windows.azure.com/ When you first visit the developer portal, you will see the one project that you created when you set up your Windows Azure account (In a fit of creativity, I named my project StephenWalther).     Creating a New Windows Azure Hosted Service Before you can host an application in the cloud, you must first add a hosted service to your project. Click your project on the summary page and click the New Service link. You are presented with the option of creating either a new Storage Account or a new Hosted Services.     Because we have code that we want to run in the cloud – the WCF Service -- we want to select the Hosted Services option. After you select this option, you must provide a name and description for your service. This information is used on the developer portal so you can distinguish your services.     When you create a new hosted service, you must enter a unique name for your service (I selected jQueryApp) and you must select a region for this service (I selected Anywhere US). Click the Create button to create the new hosted service.   Install the Windows Azure Tools for Visual Studio We’ll use Visual Studio to create our jQuery project. Before you can use Visual Studio with Windows Azure, you must first install the Windows Azure Tools for Visual Studio. Go to http://www.microsoft.com/windowsazure/ and click the Get Tools and SDK button. The Windows Azure Tools for Visual Studio works with both Visual Studio 2008 and Visual Studio 2010.   Installation of the Windows Azure Tools for Visual Studio is painless. You just need to check some agreement checkboxes and click the Next button a few times and installation will begin:   Creating a Windows Azure Application After you install the Windows Azure Tools for Visual Studio, you can choose to create a Windows Azure Cloud Service by selecting the menu option File, New Project and selecting the Windows Azure Cloud Service project template. I named my new Cloud Service with the name jQueryApp.     Next, you need to select the type of Cloud Service project that you want to create from the New Cloud Service Project dialog.   
I selected the C# ASP.NET Web Role option. Alternatively, I could have picked the ASP.NET MVC 2 Web Role option if I wanted to use jQuery with ASP.NET MVC or even the CGI Web Role option if I wanted to use jQuery with PHP. After you complete these steps, you end up with two projects in your Visual Studio solution. The project named WebRole1 represents your ASP.NET application and we will use this project to create our jQuery application. Creating the jQuery Application in the Cloud We are now ready to create the jQuery application. We’ll create a super simple application that displays a list of records retrieved from a WCF service (hosted in the cloud). Create a new page in the WebRole1 project named Default.htm and add the following code: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Products</title> <style type="text/css"> #productContainer div { border:solid 1px black; padding:5px; margin:5px; } </style> </head> <body> <h1>Product Catalog</h1> <div id="productContainer"></div> <script id="productTemplate" type="text/html"> <div> Name: {{= name }} <br /> Price: {{= price }} </div> </script> <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/jquery.tmpl.js" type="text/javascript"></script> <script type="text/javascript"> var products = [ {name:"Milk", price:4.55}, {name:"Yogurt", price:2.99}, {name:"Steak", price:23.44} ]; $("#productTemplate").render(products).appendTo("#productContainer"); </script> </body> </html> The jQuery code in this page simply displays a list of products by using a template. I am using a jQuery template to format each product. You can learn more about using jQuery templates by reading the following blog entry by Scott Guthrie: http://weblogs.asp.net/scottgu/archive/2010/05/07/jquery-templates-and-data-linking-and-microsoft-contributing-to-jquery.aspx You can test whether the Default.htm page is working correctly by running your application (hit the F5 key). The first time that you run your application, a database is set up on your local machine to simulate cloud storage. You will see the following dialog: If the Default.htm page works as expected, you should see the list of three products: Adding an Ajax-Enabled WCF Service In the previous section, we created a simple jQuery application that displays an array by using a template. The application is a little too simple because the data is static. In this section, we’ll modify the page so that the data is retrieved from a WCF service instead of an array. First, we need to add a new Ajax-enabled WCF Service to the WebRole1 project. Select the menu option Project, Add New Item and select the Ajax-enabled WCF Service project item. Name the new service ProductService.svc. Modify the service so that it returns a static collection of products. 
The final code for the ProductService.svc should look like this: using System.Collections.Generic; using System.ServiceModel; using System.ServiceModel.Activation; namespace WebRole1 { public class Product { public string name { get; set; } public decimal price { get; set; } } [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class ProductService { [OperationContract] public IList<Product> SelectProducts() { var products = new List<Product>(); products.Add(new Product {name="Milk", price=4.55m} ); products.Add(new Product { name = "Yogurt", price = 2.99m }); products.Add(new Product { name = "Steak", price = 23.44m }); return products; } } }   In real life, you would want to retrieve the list of products from storage instead of a static array. We are being lazy here. Next you need to modify the Default.htm page to use the ProductService.svc. The jQuery script in the following updated Default.htm page makes an Ajax call to the WCF service. The data retrieved from the ProductService.svc is displayed in the client template. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Products</title> <style type="text/css"> #productContainer div { border:solid 1px black; padding:5px; margin:5px; } </style> </head> <body> <h1>Product Catalog</h1> <div id="productContainer"></div> <script id="productTemplate" type="text/html"> <div> Name: {{= name }} <br /> Price: {{= price }} </div> </script> <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/jquery.tmpl.js" type="text/javascript"></script> <script type="text/javascript"> $.post("ProductService.svc/SelectProducts", function (results) { var products = results["d"]; $("#productTemplate").render(products).appendTo("#productContainer"); }); </script> </body> </html>   Deploying the jQuery Application to the Cloud Now that we have created our jQuery application, we are ready to deploy our application to the cloud so that the whole world can use it. Right-click your jQueryApp project in the Solution Explorer window and select the Publish menu option. When you select publish, your application and your application configuration information is packaged up into two files named jQueryApp.cspkg and ServiceConfiguration.cscfg. Visual Studio opens the directory that contains the two files. In order to deploy these files to the Windows Azure cloud, you must upload these files yourself. Return to the Windows Azure Developers Portal at the following address: http://windows.azure.com/ Select your project and select the jQueryApp service. You will see a mysterious cube. Click the Deploy button to upload your application.   Next, you need to browse to the location on your hard drive where the jQueryApp project was published and select both the packaged application and the packaged application configuration file. Supply the deployment with a name and click the Deploy button.     While your application is in the process of being deployed, you can view a progress bar.     Running the jQuery Application in the Cloud Finally, you can run your jQuery application in the cloud by clicking the Run button.   It might take several minutes for your application to initialize (go grab a coffee). 
After WebRole1 finishes initializing, you can navigate to the following URL to view your live jQuery application in the cloud: http://jqueryapp.cloudapp.net/default.htm The page is hosted on the Windows Azure cloud and the WCF service executes every time that you request the page to retrieve the list of products. Summary Because we started from scratch, we needed to complete several steps to create and deploy our jQuery application to the Windows Azure cloud. We needed to create a Windows Azure account, create a hosted service, install the Windows Azure Tools for Visual Studio, create the jQuery application, and deploy it to the cloud. Now that we have finished this process once, modifying our existing cloud application or creating a new cloud application is easy. jQuery and Windows Azure work nicely together. We can take advantage of jQuery to build applications that run in the browser and we can take advantage of Windows Azure to host the backend services required by our jQuery application. The big benefit of Windows Azure is that it enables us to scale. If, all of the sudden, our jQuery application explodes in popularity, Windows Azure enables us to easily scale up to meet the demand. We can handle anything that the Internet might throw at us.

    Read the article
