Search Results

Search found 56342 results on 2254 pages for 'versant object database'.


  • Python C API return more than one value / object without building a tuple [migrated]

    - by Grisu
    I have the following problem. I have written a C extension for Python (2.7 / 3.2) to interface a self-written software library. Unfortunately I need to return two values from the function, where the last one is optional. In Python I tried def func(x,y): return x+y, x-y test = func(13,4) but test is a tuple. If I write test1,test2 = func(13,4) I get both values separately. Is there a way to return only one value without unpacking the tuple, i.e. so that the second (third, fourth, ...) value is simply ignored? And if such a solution exists, what does it look like for the C API? Because return Py_BuildValue("ii",x+y,x-y); results in a tuple as well.

    Read the article

  • Python C API return more than one value / object

    - by Grisu
    I have the following problem. I have written a C extension for Python to interface a self-written software library. Unfortunately I need to return two values from the C function, where the last one is optional. In Python the equivalent is def func(x,y): return x+y, x-y test = func(13,4) #only the first value is used In my C extension I use return Py_BuildValue("ii",x+y,x-y); which results in a tuple. If I now try to access the return value from Python via test2 = cfunc(13,4) print(test2) I get a tuple instead of only the first return value. How is it possible to get the same behavior as in Python from a C extension?

    Read the article

  • Getting a sphere to roll down a .FBX object Unity3D/C#

    - by Timothy Williams
    I'm working on a little ramp-and-ball game in Unity. I modeled the ramp outside Unity and exported it to a .FBX file, then imported the ramp into Unity. I set up the ball and the ramp; both have Rigidbodies, and the ramp is set to isKinematic = true, yet when I play the game the ball just falls right through the ramp and hits the floor below it, so something is wrong with the ramp. Am I doing something wrong? Can physics not be applied to .FBX files? Thanks, Tim.
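    A likely cause is that the imported .FBX mesh has no collider, so the physics engine has nothing for the ball to collide against. The following is only a rough sketch (assuming the ramp GameObject is otherwise set up as described in the question): a small component attached to the ramp that makes sure a MeshCollider exists and the Rigidbody stays kinematic.

      using UnityEngine;

      // Illustrative helper; attach to the imported ramp object.
      public class RampColliderSetup : MonoBehaviour
      {
          void Awake()
          {
              // An imported mesh renders fine without a collider,
              // but the ball will fall straight through it.
              if (GetComponent<Collider>() == null)
              {
                  gameObject.AddComponent<MeshCollider>();
              }

              // A static ramp should not be moved by physics.
              Rigidbody rb = GetComponent<Rigidbody>();
              if (rb != null)
              {
                  rb.isKinematic = true;
              }
          }
      }

    In the Unity editor the same thing can be done by hand: add a Mesh Collider to the ramp (or enable "Generate Colliders" on the model import settings) and keep its Rigidbody kinematic.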

    Read the article

  • SQL SERVER - Spatial Database Queries - What About BLOB - T-SQL Tuesday #006

    Michael Coles is one of the most interesting book authors I have ever met. He has a flair for writing complex stuff in simple language. There are very few people like that. I really enjoyed reading his recent book, Expert SQL Server 2008 Encryption. I strongly suggest taking a look at it. This [...]

    Read the article

  • No SQL Compact Edition Support in VS 2013

    - by James Izzard
    Firstly, apologies if this is a poor question - I am an engineer, not a programmer. I have spent time moving from Visual Basic to C# and have started C#/SQL tutorials. I have noticed that VS 2013 has stopped supporting the Compact Edition database normally used for standalone desktop apps. Somebody has kindly written a plugin to re-implement support. I have also noticed a belief circulating that SQLite is to replace the Compact Edition. Would anybody be able to advise whether this is accurate? I am slightly confused as to which database is best suited for desktop app development inside VS 2013. Any comment greatly appreciated. Cheers, James
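    For reference, if SQLite ends up being the replacement, a minimal C# connection usually goes through an ADO.NET provider such as the System.Data.SQLite NuGet package. This is only an illustrative sketch, not tied to any particular VS 2013 tooling; the file and table names are made up.

      using System.Data.SQLite; // from the System.Data.SQLite NuGet package

      public static class SqliteSmokeTest
      {
          public static void CreateNotesTable()
          {
              // Creates app.db next to the executable if it does not exist yet.
              using (var connection = new SQLiteConnection("Data Source=app.db"))
              {
                  connection.Open();
                  using (var command = new SQLiteCommand(
                      "CREATE TABLE IF NOT EXISTS Notes (Id INTEGER PRIMARY KEY, Text TEXT)", connection))
                  {
                      command.ExecuteNonQuery();
                  }
              }
          }
      }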

    Read the article

  • Decompressing a very large serialized object and managing memory

    - by Mike_G
    I have an object that contains tons of data used for reports. In order to get this object from the server to the client I first serialize the object in a memory stream, then compress it using the GZip stream of .NET. I then send the compressed object as a byte[] to the client. The problem is that on some clients, when they get the byte[] and try to decompress and deserialize the object, a System.OutOfMemory exception is thrown. I've read that this exception can be caused by new()-ing a bunch of objects, or holding on to a bunch of strings. Both of these are happening during the deserialization process. So my question is: How do I prevent the exception (any good strategies)? The client needs all of the data, and I've trimmed down the number of strings as much as I can. Edit: here is the code I am using to serialize/compress (implemented as extension methods) public static byte[] SerializeObject<T>(this object obj, T serializer) where T: XmlObjectSerializer { Type t = obj.GetType(); if (!Attribute.IsDefined(t, typeof(DataContractAttribute))) return null; byte[] initialBytes; using (MemoryStream stream = new MemoryStream()) { serializer.WriteObject(stream, obj); initialBytes = stream.ToArray(); } return initialBytes; } public static byte[] CompressObject<T>(this object obj, T serializer) where T : XmlObjectSerializer { Type t = obj.GetType(); if(!Attribute.IsDefined(t, typeof(DataContractAttribute))) return null; byte[] initialBytes = obj.SerializeObject(serializer); byte[] compressedBytes; using (MemoryStream stream = new MemoryStream(initialBytes)) { using (MemoryStream output = new MemoryStream()) { using (GZipStream zipper = new GZipStream(output, CompressionMode.Compress)) { Pump(stream, zipper); } compressedBytes = output.ToArray(); } } return compressedBytes; } internal static void Pump(Stream input, Stream output) { byte[] bytes = new byte[4096]; int n; while ((n = input.Read(bytes, 0, bytes.Length)) != 0) { output.Write(bytes, 0, n); } } And here is my code for decompress/deserialize: public static T DeSerializeObject<T,TU>(this byte[] serializedObject, TU deserializer) where TU: XmlObjectSerializer { using (MemoryStream stream = new MemoryStream(serializedObject)) { return (T)deserializer.ReadObject(stream); } } public static T DecompressObject<T, TU>(this byte[] compressedBytes, TU deserializer) where TU: XmlObjectSerializer { byte[] decompressedBytes; using(MemoryStream stream = new MemoryStream(compressedBytes)) { using(MemoryStream output = new MemoryStream()) { using(GZipStream zipper = new GZipStream(stream, CompressionMode.Decompress)) { ObjectExtensions.Pump(zipper, output); } decompressedBytes = output.ToArray(); } } return decompressedBytes.DeSerializeObject<T, TU>(deserializer); } The object that I am passing is a wrapper object, it just contains all the relevant objects that hold the data. The number of objects can be a lot (depending on the report's date range), but I've seen as many as 25k strings. One thing I did forget to mention is I am using WCF, and since the inner objects are passed individually through other WCF calls, I am using the DataContract serializer, and all my objects are marked with the DataContract attribute.
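    One strategy worth noting, sketched below rather than offered as a drop-in fix: the decompression path above materializes the full decompressed byte[] and then a second MemoryStream before deserializing. Letting the DataContract serializer read directly from the GZipStream avoids ever holding those extra copies in memory at the same time.

      // Requires System.IO, System.IO.Compression and System.Runtime.Serialization.
      public static T DecompressAndDeserialize<T>(byte[] compressedBytes, XmlObjectSerializer deserializer)
      {
          using (MemoryStream input = new MemoryStream(compressedBytes))
          using (GZipStream zipper = new GZipStream(input, CompressionMode.Decompress))
          {
              // The serializer pulls from the decompression stream as it goes,
              // so no intermediate decompressed byte[] is ever allocated.
              return (T)deserializer.ReadObject(zipper);
          }
      }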

    Read the article

  • Keeping an enum and a table in sync

    - by MPelletier
    I'm making a program that will post data to a database, and I've run into a pattern that I'm sure is familiar: a short table of most-likely (very strongly likely) fixed values that serves as an enum. So suppose the following table called Status: Status Id Description -------------- 0 Unprocessed 1 Pending 2 Processed 3 Error In my program I need to determine a status Id for another table, or possibly update a record with a new status Id. I could hardcode the status Ids in an enum and hope no one ever changes the database. Or I could pre-fetch the values based on the description (thus hardcoding that instead). What would be the correct approach to keeping these two, the enum and the table, in sync?
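    One common middle ground, sketched below with the table and enum names taken from the question and the rest purely illustrative, is to keep the enum in code as the source of truth and verify at startup that the database table still agrees with it, failing fast if someone has changed the table.

      // Requires using System; and using System.Data;
      public enum Status { Unprocessed = 0, Pending = 1, Processed = 2, Error = 3 }

      public static class StatusTableCheck
      {
          public static void VerifyStatusTable(IDbConnection connection)
          {
              using (IDbCommand cmd = connection.CreateCommand())
              {
                  cmd.CommandText = "SELECT Id, Description FROM Status";
                  using (IDataReader reader = cmd.ExecuteReader())
                  {
                      while (reader.Read())
                      {
                          int id = reader.GetInt32(0);
                          string description = reader.GetString(1);

                          // Every row must map onto an enum member with the same name and value.
                          if (!Enum.IsDefined(typeof(Status), id) ||
                              ((Status)id).ToString() != description)
                          {
                              throw new InvalidOperationException(
                                  "Status table row " + id + " ('" + description + "') does not match the Status enum.");
                          }
                      }
                  }
              }
          }
      }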

    Read the article

  • projection / view matrix: the object is bigger than it should be, and depth does not affect vertices

    - by Francesco Noferi
    I'm currently trying to write a C 3D software rendering engine from scratch just for fun and to have an insight on what OpenGL does behind the scenes and what 90's programmers had to do on DOS. I have written my own matrix library and tested it without noticing any issues, but when I tried projecting the vertices of a simple 2x2 cube at 0,0 as seen by a basic camera at 0,0,10, the cube seems to appear way bigger than the application's window. If I scale the vertices' coordinates down by 8 times I can see a proper cube centered on the screen. This cube doesn't seem to be in perspective: when seen from the front, the back vertices perfectly overlap with the front ones, so I'm quite sure it's not correct. This is how I create the view and projection matrices (vec4_initd initializes the vectors with w=0, vec4_initw initializes the vectors with w=1): void mat4_lookatlh(mat4 *m, const vec4 *pos, const vec4 *target, const vec4 *updirection) { vec4 fwd, right, up; // fwd = norm(pos - target) fwd = *target; vec4_sub(&fwd, pos); vec4_norm(&fwd); // right = norm(cross(updirection, fwd)) vec4_cross(updirection, &fwd, &right); vec4_norm(&right); // up = cross(right, forward) vec4_cross(&fwd, &right, &up); // orientation and translation matrices combined vec4_initd(&m->a, right.x, up.x, fwd.x); vec4_initd(&m->b, right.y, up.y, fwd.y); vec4_initd(&m->c, right.z, up.z, fwd.z); vec4_initw(&m->d, -vec4_dot(&right, pos), -vec4_dot(&up, pos), -vec4_dot(&fwd, pos)); } void mat4_perspectivefovrh(mat4 *m, float fovdegrees, float aspectratio, float near, float far) { float h = 1.f / tanf(ftoradians(fovdegrees / 2.f)); float w = h / aspectratio; vec4_initd(&m->a, w, 0.f, 0.f); vec4_initd(&m->b, 0.f, h, 0.f); vec4_initw(&m->c, 0.f, 0.f, -far / (near - far)); vec4_initd(&m->d, 0.f, 0.f, (near * far) / (near - far)); } This is how I project my vertices: void device_project(device *d, const vec4 *coord, const mat4 *transform, int *projx, int *projy) { vec4 result; mat4_mul(transform, coord, &result); *projx = result.x * d->w + d->w / 2; *projy = result.y * d->h + d->h / 2; } void device_rendervertices(device *d, const camera *camera, const mesh meshes[], int nmeshes, const rgba *color) { int i, j; mat4 view, projection, world, transform, projview; mat4 translation, rotx, roty, rotz, transrotz, transrotzy; int projx, projy; // vec4_unity = (0.f, 1.f, 0.f, 0.f) mat4_lookatlh(&view, &camera->pos, &camera->target, &vec4_unity); mat4_perspectivefovrh(&projection, 45.f, (float)d->w / (float)d->h, 0.1f, 1.f); for (i = 0; i < nmeshes; i++) { // world matrix = translation * rotz * roty * rotx mat4_translatev(&translation, meshes[i].pos); mat4_rotatex(&rotx, ftoradians(meshes[i].rotx)); mat4_rotatey(&roty, ftoradians(meshes[i].roty)); mat4_rotatez(&rotz, ftoradians(meshes[i].rotz)); mat4_mulm(&translation, &rotz, &transrotz); // transrotz = translation * rotz mat4_mulm(&transrotz, &roty, &transrotzy); // transrotzy = transrotz * roty = translation * rotz * roty mat4_mulm(&transrotzy, &rotx, &world); // world = transrotzy * rotx = translation * rotz * roty * rotx // transform matrix mat4_mulm(&projection, &view, &projview); // projview = projection * view mat4_mulm(&projview, &world, &transform); // transform = projview * world = projection * view * world for (j = 0; j < meshes[i].nvertices; j++) { device_project(d, &meshes[i].vertices[j], &transform, &projx, &projy); device_putpixel(d, projx, projy, color); } } } This is how the cube and camera are initialized: // test mesh cube = &meshlist[0]; mesh_init(cube, "Cube", 8);
cube->rotx = 0.f; cube->roty = 0.f; cube->rotz = 0.f; vec4_initw(&cube->pos, 0.f, 0.f, 0.f); vec4_initw(&cube->vertices[0], -1.f, 1.f, 1.f); vec4_initw(&cube->vertices[1], 1.f, 1.f, 1.f); vec4_initw(&cube->vertices[2], -1.f, -1.f, 1.f); vec4_initw(&cube->vertices[3], -1.f, -1.f, -1.f); vec4_initw(&cube->vertices[4], -1.f, 1.f, -1.f); vec4_initw(&cube->vertices[5], 1.f, 1.f, -1.f); vec4_initw(&cube->vertices[6], 1.f, -1.f, 1.f); vec4_initw(&cube->vertices[7], 1.f, -1.f, -1.f); // main camera vec4_initw(&maincamera.pos, 0.f, 0.f, 10.f); maincamera.target = vec4_zerow; and, just to be sure, this is how I compute matrix multiplications: void mat4_mul(const mat4 *m, const vec4 *va, vec4 *vb) { vb->x = m->a.x * va->x + m->b.x * va->y + m->c.x * va->z + m->d.x * va->w; vb->y = m->a.y * va->x + m->b.y * va->y + m->c.y * va->z + m->d.y * va->w; vb->z = m->a.z * va->x + m->b.z * va->y + m->c.z * va->z + m->d.z * va->w; vb->w = m->a.w * va->x + m->b.w * va->y + m->c.w * va->z + m->d.w * va->w; } void mat4_mulm(const mat4 *ma, const mat4 *mb, mat4 *mc) { mat4_mul(ma, &mb->a, &mc->a); mat4_mul(ma, &mb->b, &mc->b); mat4_mul(ma, &mb->c, &mc->c); mat4_mul(ma, &mb->d, &mc->d); }

    Read the article

  • How do I close a database connection being used to produce a streaming result in a WCF service?

    - by Dan
    I have been unable to find any documentation on properly closing database connections in WCF service operations. I have a service that returns a streamed response through the following method. public virtual Message GetData() { string sqlString = BuildSqlString(); SqlConnection conn = Utils.GetConnection(); SqlCommand cmd = new SqlCommand(sqlString, conn); XmlReader xr = cmd.ExecuteXmlReader(); Message msg = Message.CreateMessage( OperationContext.Current.IncomingMessageVersion, GetResponseAction(), xr); return msg; } I cannot close the connection within the method or the streaming of the response message will be terminated. Since control returns to the WCF system after the completion of that method, I don't know how I can close that connection afterwards. Any suggestions or pointers to additional documentation would be appreciated.
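    One approach that is often suggested, shown here only as a sketch (Utils.GetConnection and GetResponseAction come from the question itself), is to hook the OperationContext's OperationCompleted event, which fires after WCF has finished sending the streamed reply, and dispose the reader and connection there.

      public virtual Message GetData()
      {
          string sqlString = BuildSqlString();
          SqlConnection conn = Utils.GetConnection();
          SqlCommand cmd = new SqlCommand(sqlString, conn);
          XmlReader xr = cmd.ExecuteXmlReader();

          // Runs once the response message has been fully streamed to the client.
          OperationContext.Current.OperationCompleted += (sender, args) =>
          {
              xr.Close();
              conn.Close();
          };

          return Message.CreateMessage(
              OperationContext.Current.IncomingMessageVersion,
              GetResponseAction(),
              xr);
      }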

    Read the article

  • Google Books - Online Literacy Database

    The idea of Google Books was first conceived in 2002 when a small group of Google programmers started pondering the question of how many man hours it would take to scan every single book ever written... [Author: Chris Holgate - Computers and Internet - March 29, 2010]

    Read the article

  • Storing lots of large strings with frequent "appends" and few reads

    - by Thiago Moraes
    In my current project, I need to store a very long ASCII string with each instance of a given object. This string will receive 2 appends per minute and will not be retrieved very frequently. The worst-case scenario is a 5-10 MB string. I'll have thousands of instances of my object and I'm worried that storing all those strings in the filesystem would not be optimal, but I can't think of a better solution. Can anyone suggest an alternative? Maybe a key-value store? In that case, which one? Any other thoughts?
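    For what it is worth, plain append-only files are usually a reasonable baseline for this access pattern, since an append never rewrites the existing data. A minimal sketch follows (one file per object instance; the path layout is made up for illustration):

      using System.IO;

      public class StringAppender
      {
          private readonly string _baseDirectory;

          public StringAppender(string baseDirectory)
          {
              _baseDirectory = baseDirectory;
              Directory.CreateDirectory(baseDirectory);
          }

          // Called a couple of times per minute per instance; opens, appends, and closes the file.
          public void Append(string instanceId, string chunk)
          {
              string path = Path.Combine(_baseDirectory, instanceId + ".txt");
              File.AppendAllText(path, chunk);
          }

          // Rarely called, so reading the whole 5-10 MB string at once is acceptable.
          public string ReadAll(string instanceId)
          {
              return File.ReadAllText(Path.Combine(_baseDirectory, instanceId + ".txt"));
          }
      }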

    Read the article

  • Render Ruby object to interactive html

    - by AvImd
    I am developing a tool that discovers network services enabled on a host and writes a short summary on them like this: init,1 +-- login,1560 -- +-- bash,1629 +-- nc,12137 -lup 50505 { :net = [ [0] "*:50505 IPv4 UDP " ], :fds = [ [0] "/root (cwd)", [1] "/", [2] "/bin/nc.traditional", [3] "/xochikit/ld_poison.so (stat: No such file or directory)", [4] "/dev/tty2", [5] "*:50505" ] } It proved to be very nicely formatted and useful for quick discovery, thanks to the colors provided by the awesome_print gem. However, its output is just text. One issue is that if I want to share it, I lose the colors. I'd also like to fold and unfold parts of objects, quickly jump to specific processes, and so on - adding comments, for example. Thus I want something web-based. What is the best approach to implementing features like these? I haven't worked with web interfaces before and I don't have much experience with Ruby.

    Read the article

  • Data integration event for HOUG members, DW/BI and Database section

    - by user645740
    The HOUG organization's Oracle data integration event takes place on Wednesday, 29 June 2011. Participation requirement: HOUG membership! If you are a HOUG member, do come; if you are an Oracle user and not yet a HOUG member, join the HOUG association quickly! Topics: data integration, ELT-ETL, OWB, ODI, GoldenGate, RAC, Active Data Guard, etc. Oracle Warehouse Builder has many users in Hungary, and not only in data warehouse/BI environments. In the ELT area Oracle focuses on Oracle Data Integrator, which works excellently in heterogeneous environments, i.e. not only with Oracle databases. Click here: the HOUG organization's Oracle data integration event.

    Read the article

  • Performance issues with visibility detection and object transparency

    - by maul
    I'm working on a 3D game that has a view similar to classic isometric games (Diablo, etc.). One of the things I'm trying to implement is the effect of turning walls transparent when the player walks behind them. By itself this is not a huge issue, but I'm having trouble determining exactly which walls should be transparent. I can't use a circle or square mask. There are a lot of cases where the wall piece at the same (relative) position has different visibility depending on the surrounding area. With the help of a friend I came up with this algorithm: Create a grid around the player that contains a lot of "visibility points" (my game is semi tile-based so I create one point for every tile on the grid) - the size of the square's side is close to the radius where I make objects transparent. I found 6x6 to be a good value, so that's 36 visibility points total. For every visibility point on the grid, check if that point is in the player's line of sight. For every visibility point that is in the LOS, cast a ray from the camera to that point and mark all objects the ray hits as transparent. This algorithm works - not perfectly, but it only requires some tuning - however, it is very slow. As you can see, it requires 36 ray casts minimum, but most of the time 60-70 depending on the position. That's simply too much for the CPU. Is there a better way to do this? I'm using Unity 3D but I'm not looking for an engine-specific solution.
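    A cheaper alternative often used for this effect is to cast a single ray (or a small handful of rays) from the camera to the player and fade out only the objects the ray passes through. The idea itself is engine-agnostic; the sketch below uses Unity-style calls for concreteness, attached to the camera, and is not the grid approach described above. Restoring full opacity when a wall no longer blocks the view is omitted for brevity, and the shader is assumed to expose an alpha-capable _Color property.

      using UnityEngine;

      public class OccluderFader : MonoBehaviour
      {
          public Transform player;
          public LayerMask wallMask;

          void Update()
          {
              Vector3 origin = transform.position;            // camera position
              Vector3 toPlayer = player.position - origin;

              // One RaycastAll call per frame instead of 36-70 individual rays.
              RaycastHit[] hits = Physics.RaycastAll(origin, toPlayer.normalized, toPlayer.magnitude, wallMask);
              foreach (RaycastHit hit in hits)
              {
                  Renderer rend = hit.collider.GetComponent<Renderer>();
                  if (rend != null)
                  {
                      // Fade the occluding wall; assumes a transparency-capable material.
                      Color c = rend.material.color;
                      c.a = 0.3f;
                      rend.material.color = c;
                  }
              }
          }
      }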

    Read the article

  • Heroku Postgres: A New SQL Database-as-a-Service

    Idera, a Houston-based company known worldwide for its SQL Server solutions in the realms of backup and recovery, performance monitoring, auditing, security, and more, recently announced that it had won five of SQL Server Magazine's 2011 Community Choice Awards. SQL Server Magazine, a publication produced by Penton Media, offers SQL Server users, both beginning and advanced, a host of hands-on information delivered by SQL Server experts. The magazine presented Idera with 2011 Community Choice Awards for five separate products which will only serve to boost the already strong reputation of it...

    Read the article

  • Finding Stuff in SQL Server Database DDL

    You'd have thought that nothing would be easier than using SQL Server Management Studio (SSMS) for searching through the DDL for both the names and definitions of the structural metadata of your databases, for the occurrence of a particular string of letters. Not so easy, it turns out, though Phil Factor is able to come up with various methods for various purposes.

    Read the article

  • Is it possible to automatically backup a mysql database to dropbox?

    - by Rob
    Is it possible to automatically back up my database to Dropbox? If so, how can I do it? The key criteria I need it to meet are: Be automatic. Be Mac compliant. Be weekly. Sync with Dropbox (http://www.dropbox.com) automatically. Be able to back up several databases from several websites. Be free... or relatively cheap! Have a guide on how to set up the solution. Here's a screenshot of my cPanel but there doesn't seem to be an automatic option:

    Read the article

  • Raycasting mouse coordinates to rotated object?

    - by SPL
    I am trying to cast a ray from my mouse to a plane at a specified position with a known width, length and height. I know that you can use NDC (Normalized Device Coordinates) to cast a ray, but I don't know how I can detect whether the ray actually hit the plane and where it hit when it did. The plane is translated -100 on the Y and rotated 60 on the X, then translated again by -100. Can anyone please give me a good tutorial on this, for a complete noob? I am almost completely new to matrix and vector transformations.
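    Once the mouse ray is in world space, the hit test reduces to a ray-plane intersection. Below is a short sketch of the math in C# using System.Numerics; the plane's rotation (the 60 degrees on X) is assumed to already be applied to the plane normal before calling, and clamping the hit point to the plane's width/length is left as a local-space comparison against the half-extents.

      using System.Numerics;

      public static class RayPlane
      {
          // Returns true and the hit point if the ray meets the plane in front of the ray origin.
          public static bool Intersect(
              Vector3 rayOrigin, Vector3 rayDirection,   // rayDirection should be normalized
              Vector3 planePoint, Vector3 planeNormal,   // planeNormal with the plane's rotation applied
              out Vector3 hitPoint)
          {
              hitPoint = default(Vector3);
              float denom = Vector3.Dot(planeNormal, rayDirection);

              // Ray parallel to the plane: no usable intersection.
              if (System.Math.Abs(denom) < 1e-6f)
                  return false;

              float t = Vector3.Dot(planePoint - rayOrigin, planeNormal) / denom;
              if (t < 0f)
                  return false;                          // plane is behind the ray origin

              hitPoint = rayOrigin + rayDirection * t;
              return true;
          }
      }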

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet.  We demonstrated the calculations against sample data for a small set of accounts.  While this worked fine, in the real world the problem is much bigger because the amount of data is much bigger.  So much bigger that our approach in the previous post won’t scale to meet the real-world needs. From our previous post, here are the challenges we need to conquer: The actual data that needs to be used lives in a database, not in a spreadsheet The actual data is much, much bigger - too big to fit into the normal R memory space and too big to want to move across the network The overall process needs to run fast - much faster than a single processor The actual data needs to be kept secured - another reason to not want to move it from the database and across the network And the process of calculating the IRR needs to be integrated together with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes In this post, we will show how we moved from a sample data environment to working with full-scale data.  This post is based on actual work we did for a financial services customer during a recent proof-of-concept. Getting started with the Database At this point, we have some sample data and our IRR function.  We were at a similar point in our customer proof-of-concept exercise - we had sample data but we did not have the full customer data yet.  So our database was empty.  But, this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the).  The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create().  The code also shows how we can access the database table IRR_DATA as if it were a normal R data.frame named IRR_DATA. If we go to sql*plus, we can also check out our new IRR_DATA table: At this point, we now have our sample data loaded in the database as a normal Oracle table called IRR_DATA.  So, we then proceeded to test our R function working with database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, and then called our IRR function.  This worked.  No SQL coding required! Going from Crawling to Walking Now that we have shown our R code working with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts.  In other words, we wanted to implement the split-apply-combine technique we discussed in our first post in this series.  Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply().  You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1 Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via  “R CMD INSTALL myIRR”).
    The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which I am running it.  What actually happens is that ore.groupApply uses the Oracle database to perform the work.  And the Oracle database is what actually splits the IRR_DATA table by ACCOUNT.  Then the Oracle database takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function.  Then the Oracle database combines all the individual results from the calls to the R function. This is significant because now the embedded R engine only needs to deal with the data for a single account at a time.  Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care.  Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts). Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program.  Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA does not need to leave the database server - this is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once. From Walking to Running ore.groupApply is rather nice, but it still has the drawback that I run this from a desktop R instance.  This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation.  But, this is not an issue for ORE.  Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations.  That is extremely exciting and the way we actually did these calculations in the customer proof. As part of Oracle R Enterprise, there is a SQL equivalent to ore.groupApply, which it refers to as “rqGroupEval”.  To use rqGroupEval via SQL, there is a bit of simple setup needed.  Basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we are able to define using the database’s pipeline table function mechanisms. Here is the setup script: At this point, our initial setup of rqGroupEval is done for the IRR_DATA table.  The next step is to define our R function to the database.  We do that via a call to ORE’s rqScriptCreate. Now we can test it.  The SQL you use to run rqGroupEval uses the Oracle database pipeline table function syntax.  The first argument to irr_dataGroupEval is a cursor defining our input.  You can add additional where clauses and subqueries to this cursor as appropriate.  The second argument is any additional inputs to the R function.  The third argument is the text of a dummy select statement.  The dummy select statement is used by the database to identify the columns and datatypes to expect the R function to return.  The fourth argument is the column of the input table to split/group by.
    The final argument is the name of the R function as you defined it when you called rqScriptCreate(). The Real-World Results In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example.  For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so.  In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that.  And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.  For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine.  As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations.  To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features.  Let’s look at the SQL used in the customer proof: Notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function.  That is all we need to do to enable Oracle to use parallel R engines. Here are a few screenshots of what this SQL looked like in the Real-Time SQL Monitor when we ran this during the proof of concept (hint: you might need to right-click on these images to be able to view the images full-screen to see the entire image): From the above, you can notice a few things (numbers 1 thru 5 below correspond with highlighted numbers on the images above.  You may need to right click on the above images and view the images full-screen to see the entire image): The SQL completed in 110 seconds (1.8 minutes) We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation) We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation) We ran with 72 degrees of parallelism spread across 4 database servers Most of our 110 seconds was spent in the “External Procedure call” event On average, we performed 8,200 executions of our R function per second (911k accounts/110s) On average, each execution was passed 110 rows of data (103m detail rows/911k accounts) On average, we did 41,000 single time period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods) On average, we processed over 900,000 rows of database data in R per second (103m detail rows/110s) R + Oracle R Enterprise: Best of R + Best of Oracle Database This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time.  While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues).  It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.
    In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world.  In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.

    Read the article

  • Is there a way to list all the database queries my wordpress install is making for a given event?

    - by mattloak
    Using a method similar to the one described here: (http://stackoverflow.com/questions/14873/how-do-i-display-database-query-statistics-on-wordpress-site), I can see the total number of queries being made when I load a page. Now I'd like to output a list of the queries that are being made when the page loads. This would allow me to see who my biggest resource hogs are without having to go through the process of elimination of all my plugins and theme scripts. How would I do this? Thanks.

    Read the article

  • How should I model the database for this problem? And which ORM can handle it?

    - by Kristof Claes
    I need to build some sort of a custom CMS for a client of ours. These are some of the functional requirements: Must be able to manage the list of Pages in the site Each Page can contain a number of ColumnGroups A ColumnGroup is nothing more than a list of Columns in a certain ColumnGroupLayout. For example: "one column taking up the entire width of the page", "two columns each taking up half of the width", ... Each Column can contain a number of ContentBlocks Examples of a ContentBlock are: TextBlock, NewsBlock, PictureBlock, ... ContentBlocks can be given a certain sorting within a Column A ContentBlock can be put in different Columns so that content can be reused without having to be duplicated. My first quick draft of how this could look in C# code (we're using ASP.NET 4.0 to develop the CMS) can be found at the bottom of my question. One of the technical requirements is that it must be as easy as possible to add new types of ContentBlocks to the CMS. So I would like to model everything as flexibly as possible. Unfortunately, I'm already stuck at trying to figure out how the database should look. One of the problems I'm having has to do with sorting different types of ContentBlocks in a Column. I guess each type of ContentBlock (like TextBlock, NewsBlock, PictureBlock, ...) should have its own table in the database because each has its own fields. A TextBlock might only have a field called Text whereas a NewsBlock might have fields for the Text, the Summary, the PublicationDate, ... Since one Column can have ContentBlocks located in different tables, I guess I'll have to create a many-to-many association for each type of ContentBlock. For example: ColumnTextBlocks, ColumnNewsBlocks and ColumnPictureBlocks. The problem I have with this setup is the sorting of the different ContentBlocks in a column. This could be something like this: TextBlock NewsBlock TextBlock TextBlock PictureBlock Where do I store the sorting number? If I store them in the association tables, I'll have to update a lot of tables when changing the sorting order of ContentBlocks in a Column. Is this a good approach to the problem? Basically, my question is: What is the best way to model this, keeping in mind that it should be easy to add new types of ContentBlocks? My next question is: What ORM can deal with that kind of modeling? To be honest, we are ORM-virgins at work. I have been reading a bit about Linq-to-SQL and NHibernate, but we have no experience with them. Because of the IList in the Column class (see code below) I think we can rule out Linq-to-SQL, right? Can NHibernate handle the mapping of data from many different tables to one IList? Also keep in mind that this is just a very small portion of the domain. Other parts are Users belonging to a certain UserGroup having certain Permissions on Pages, ColumnGroups, Columns and ContentBlocks.
The code (just a quick first draft): public class Page { public int PageID { get; set; } public string Title { get; set; } public string Description { get; set; } public string Keywords { get; set; } public IList<ColumnGroup> ColumnGroups { get; set; } } public class ColumnGroup { public enum ColumnGroupLayout { OneColumn, HalfHalf, NarrowWide, WideNarrow } public int ColumnGroupID { get; set; } public ColumnGroupLayout Layout { get; set; } public IList<Column> Columns { get; set; } } public class Column { public int ColumnID { get; set; } public IList<IContentBlock> ContentBlocks { get; set; } } public interface IContentBlock { string GetSummary(); } public class TextBlock : IContentBlock { public string GetSummary() { return "I am a piece of text."; } } public class NewsBlock : IContentBlock { public string GetSummary() { return "I am a news item."; } }
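    On the sorting question, one way to model it (a sketch only, not tied to a specific ORM, and replacing the IContentBlock interface above with an abstract base class so an ORM can map the hierarchy, e.g. table-per-hierarchy) is to promote the Column-to-ContentBlock link to its own entity that carries the sort order. Reordering blocks then only ever touches that one association table.

      public abstract class ContentBlock
      {
          public int ContentBlockID { get; set; }
          public abstract string GetSummary();
      }

      public class TextBlock : ContentBlock
      {
          public string Text { get; set; }
          public override string GetSummary() { return Text; }
      }

      public class NewsBlock : ContentBlock
      {
          public string Text { get; set; }
          public string Summary { get; set; }
          public System.DateTime PublicationDate { get; set; }
          public override string GetSummary() { return Summary; }
      }

      // Join entity: one row per (Column, ContentBlock) pair, carrying the position.
      public class ColumnContentBlock
      {
          public int ColumnID { get; set; }
          public int ContentBlockID { get; set; }
          public ContentBlock ContentBlock { get; set; }
          public int SortOrder { get; set; }
      }

      public class Column
      {
          public int ColumnID { get; set; }
          // Ordered by SortOrder when loaded; reordering only updates ColumnContentBlock rows.
          public System.Collections.Generic.IList<ColumnContentBlock> ContentBlocks { get; set; }
      }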

    Read the article

  • How to implement a score database in Android

    - by Michael Seun Araromi
    I am making a 2D game for Android using OpenGL ES. It is a space shooting game where the player shoots enemy ships. I want to keep track of the score for the number of enemy ships destroyed and keep a record of a local high score; the score should be incremented whenever an enemy is destroyed. I also want a way of displaying both the score and the high score on the game screen. I am not familiar with databases at all, and I would appreciate a clear answer or a link to a good tutorial for my case. Thanks

    Read the article

  • Optimization ended up in casting an object at each method call

    - by Aybe
    I've been doing some optimization on the following piece of code: public void DrawLine(int x1, int y1, int x2, int y2, int color) { _bitmap.DrawLineBresenham(x1, y1, x2, y2, color); } After profiling, about 70% of the time was spent getting a context for drawing and disposing of it. I ended up sketching the following overload: public void DrawLine(int x1, int y1, int x2, int y2, int color, BitmapContext bitmapContext) { _bitmap.DrawLineBresenham(x1, y1, x2, y2, color, bitmapContext); } Up to here, no problems: all the user has to do is pass a context, and performance is really great as a context is created/disposed only once (previously it was a thousand times per second). The next step was to make it generic, in the sense that it doesn't depend on a particular framework for rendering (besides .NET obviously). So I wrote this method: public void DrawLine(int x1, int y1, int x2, int y2, int color, IDisposable bitmapContext) { _bitmap.DrawLineBresenham(x1, y1, x2, y2, color, (BitmapContext)bitmapContext); } Now every time a line is drawn the generic context is cast; this was unexpected for me. Are there any approaches for fixing this design issue? Note: _bitmap is a WriteableBitmap from WPF, BitmapContext is from the WriteableBitmapEx library, DrawLineBresenham is an extension method from WriteableBitmapEx
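    One way to avoid the per-call cast is to make the drawing surface generic over its context type, so the WPF-specific BitmapContext only appears in the WPF implementation. This is a sketch of a design direction, not a WriteableBitmapEx API; WriteableBitmap, BitmapContext and DrawLineBresenham are the types from the question.

      // Framework-neutral abstraction; TContext is whatever "drawing context" the backend needs.
      public interface IDrawingSurface<TContext> where TContext : System.IDisposable
      {
          void DrawLine(int x1, int y1, int x2, int y2, int color, TContext context);
      }

      // WPF/WriteableBitmapEx implementation: no cast, the context is strongly typed.
      public sealed class WpfDrawingSurface : IDrawingSurface<BitmapContext>
      {
          private readonly WriteableBitmap _bitmap;

          public WpfDrawingSurface(WriteableBitmap bitmap)
          {
              _bitmap = bitmap;
          }

          public void DrawLine(int x1, int y1, int x2, int y2, int color, BitmapContext context)
          {
              _bitmap.DrawLineBresenham(x1, y1, x2, y2, color, context);
          }
      }

    Code that must stay rendering-agnostic can then be written against IDrawingSurface<TContext> with TContext as a type parameter, so the concrete context type is only known to the backend and no runtime cast is needed.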

    Read the article
