Search Results

Search found 563 results on 23 pages for 'c1'.


  • How to re-arrange an Excel database from 1 long row into 3 short rows and automatically repeat the process?

    - by user326884
    I would appreciate help on the above-mentioned topic. I am unfamiliar with Visual Basic for Excel, so I will need step-by-step guidance (if the solution is via Visual Basic). For example, Row 1 of Sheet A:

        A1 B1 C1 D1 E1 F1 G1 H1 I1

    is to be re-arranged into Sheet B as:

        Row 1: A1, B1, C1
        Row 2: D1, E1, F1
        Row 3: G1, H1, I1

    Sheet A (the database sheet) has a lot of rows (for example 3,000), hence Sheet B is estimated to have 9,000 rows (i.e. 3 x 3,000). Thank you in anticipation of your speedy response.
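    A minimal sketch of the row-splitting in Python with openpyxl (the library choice, file name, and sheet names are assumptions, not from the question; the asker wanted VBA, but the loop structure translates directly to a macro):

        # Split each 9-cell row of Sheet A into three 3-cell rows on Sheet B.
        from openpyxl import load_workbook

        wb = load_workbook("data.xlsx")              # hypothetical workbook
        src = wb["Sheet A"]
        dst = wb.create_sheet("Sheet B")
        for row in src.iter_rows(values_only=True):  # e.g. (A1, B1, ..., I1)
            for i in range(0, 9, 3):                 # cells 1-3, 4-6, 7-9
                dst.append(list(row[i:i + 3]))
        wb.save("data.xlsx")

    With 3,000 source rows this writes the expected 9,000 destination rows.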

    Read the article

  • Redirecting HTTP traffic from a local server on the web

    - by MrJackV
    Here is the situation: I have a webserver (let's call it C1) that is running an Apache/PHP server, and it is port forwarded so that I can access it anywhere. However, there is another computer within the webserver's LAN that has an Apache server too (let's call it C2). I cannot change the port forwarding, nor can I change the Apache server (a.k.a. install custom modules). My question is: is there a way to access C2 within a directory of C1? (e.g. going to www.website.org/random_dir would allow me to browse the root of C2's Apache server.) I am trying to change as little as possible of the config/other settings (e.g. activating modules etc.). Is there a possible solution? Thanks in advance.
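    For reference, the standard way to mount C2 under a directory of C1 is a reverse proxy. A minimal sketch, assuming mod_proxy and mod_proxy_http can be enabled on C1 (they ship with Apache, but enabling them is exactly the kind of config change the question hopes to avoid; the LAN address is hypothetical):

        # In C1's virtual host configuration
        ProxyPass        /random_dir/ http://192.168.0.2/
        ProxyPassReverse /random_dir/ http://192.168.0.2/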

    Read the article

  • How to make a variable range of cells?

    - by Ertai
    In column A I have a set of numbers (over 1,000). I want to take the average of ten of them (A1:A10) and write it into the next column (B). Then I want to take the next ten numbers and average those (A11:A20). And so on... How can I do this if cell C1 holds the number that defines the range size (i.e. 10 = A1:A10/A11:A20; 25 = A1:A25/A26:A50)? When I change the C1 value, I want column B to update automatically. Is this possible?
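    One hedged approach is a worksheet formula rather than code (assuming the averages start in B1 and C1 holds the group size):

        =AVERAGE(OFFSET($A$1,(ROW()-1)*$C$1,0,$C$1,1))

    Filled down column B, each row averages the next block of C1 cells (B1 covers A1:A10 when C1 is 10, B2 covers A11:A20, and so on), and changing C1 recalculates the whole column automatically.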

    Read the article

  • How to know which revision of router I have? [migrated]

    - by Rosamunda
    I'm trying to update my D-Link DIR-600 router with the DD-WRT firmware. I've searched for it at the site and found that revisions A1, B1 and B2 are supported, while C isn't. Now my router has this information on the back:

        P/N IIR600GNA .... C1G
        H/W Ver: C1
        F/W Ver: 3.01

    So I guess the H/W Ver is the revision, and it's C... so is it a lost cause? Or maybe because it's not just C but C1, I could do something with it? Thanks!

    Read the article

  • Recompiling an old Fortran II/IV/66 program that was compiled for OS/2; need it to run in DOS

    - by Mike Hansen
    I am helping an old scientist with some problems and have one program that he found and modified about 20 years ago. It runs fine as a 32-bit OS/2 executable, but I need it to run under DOS! I am not a programmer but a good hardware and software man, so I'm pretty lost on this problem, but here goes. I have downloaded six different compilers: Watcom F77, Silverfrost FTN95, gfortran, two versions of g77, and F80. Watcom says the program is too old ("find an older compiler"); Silverfrost opens it, debugs, etc., but reports that all the subroutines are changing arguments from "real" to "complex" and vice versa; and the g77s seem to install perfectly (library links and so on) but won't even compile the test.f programs. My first question is: should I recompile the code "as is", or "upgrade" it?

              PROGRAM xconvlv
              INTEGER N,N2,M
              PARAMETER (N=2048,N2=2048,M=128)
              INTEGER i,isign
              REAL data(n),respns(m),resp(n),ans(n2),t3(n),DUMMY
              OPEN(UNIT=1, FILE='C:\QKBAS20\FDATA1.DAT')
              DO 1 i=1,N
                READ(1,*) T3(i), data(i), DUMMY
         1    continue
              CLOSE(UNIT-1)
              do 12 i=1,N
                respns(i)=data(i)
                resp(i)=respns(i)
        12    continue
              isign=-1
              call convlv(data,N,resp,M,isign,ans)
              OPEN(UNIT=1,FILE='C:\QKBAS20\FDATA9.DAT')
              DO 14 i=1,N
                WRITE(1,*) T3(i), ans(i)
        14    continue
              END

              SUBROUTINE CONVLV(data,n,respns,m,isign,ans)
              INTEGER isign,m,n,NMAX
              REAL data(n),respns(n)
              COMPLEX ans(n)
              PARAMETER (NMAX=4096)
        * uses realft, twofft
              INTEGER i,no2
              COMPLEX fft(NMAX)
              do 11 i=1,(m-1)/2
                respns(n+1-i)=respns(m+1-i)
        11    continue
              do 12 i=(m+3)/2,n-(m-1)/2
                respns(i)=0.0
        12    continue
              call twofft(data,respns,fft,ans,n)
              no2=n/2
              do 13 i=1,no2+1
                if (isign.eq.1) then
                  ans(i)=fft(i)*ans(i)/no2
                else if (isign.eq.-1) then
                  if (abs(ans(i)).eq.0.0) pause
                  ans(i)=fft(i)/ans(i)/no2
                else
                  pause 'no meaning for isign in convlv'
                endif
        13    continue
              ans(1)=cmplx(real(ans(1)),real(ans(no2+1)))
              call realft(ans,n,-1)
              return
              END

              SUBROUTINE realft(data,n,isign)
              INTEGER isign,n
              REAL data(n)
        * uses four1
              INTEGER i,i1,i2,i3,i4,n2p3
              REAL c1,c2,hli,hir,h2i,h2r,wis,wrs
              DOUBLE PRECISION theta,wi,wpi,wpr,wr,wtemp
              theta=3.141592653589793d0/dble(n/2)
              cl=0.5
              if (isign.eq.1) then
                c2=-0.5
                call four1(data,n/2,+1)
              else
                c2=0.5
                theta=-theta
              endif
              (etc., etc., etc.)

              SUBROUTINE twofft(data,data2,fft1,fft2,n)
              INTEGER n
              REAL data1(n,data2(n)
              COMPLEX fft1(n), fft2(n)
        * uses four1
              INTEGER j,n2
              COMPLEX h1,h2,c1,c2
              c1=cmplx(0.5,0.0)
              c2=cmplx(0.0,-0.5)
              do 11 j=1,n
                fft1(j)=cmplx(data1(j),data2(j)
        11    continue
              call four1(fft1,n,1)
              fft2(1)=cmplx(aimag(fft1(1)),0.0)
              fft1(1)=cmplx(real(fft1(1)),0.0)
              n2=n+2
              do 12 j=2,n/2+1
                h1=c1*(fft1(j)+conjg(fft1(n2-j)))
                h2=c2*(fft1(j)-conjg(fft1(n2-j)))
                fft1(j)=h1
                fft1(n2-j)=conjg(h1)
                fft2(j)=h2
                fft2(n2-j)=conjg(h2)
        12    continue
              return
              END

              SUBROUTINE four1(data,nn,isign)
              INTEGER isign,nn
              REAL data(2*nn)
              INTEGER i,istep,j,m,mmax,n
              REAL tempi,tempr
              DOUBLE PRECISION theta, wi,wpi,wpr,wr,wtemp
              n=2*nn
              j=1
              do 11 i=1,n,2
                if(j.gt.i)then
                  tempr=data(j)
                  tempi=data(j+1)
              (etc., etc., etc.)
        11    continue
              mmax=istep
              goto 2
              endif
              return
              END

    There are four subroutines with this, about three pages of code, which would be much easier to e-mail to someone who is able to help me with this. My e-mail is [email protected]. Or could someone tell me where to get a "working" compiler that could recompile this? THANK-YOU, THANK-YOU, and THANK-YOU for any help with this! The errors I am getting are:

    1. In a call to CONVLV from another procedure, the first argument was of type REAL(kind=1); it is now a COMPLEX(kind=1).
    2. In a call to REALFT from another procedure, ... COMPLEX(kind=1); it is now a REAL(kind=1).
    3. In a call to TWOFFT from ... COMPLEX(kind=1); it is now a REAL(kind=1).
    4. In a previous call to FOUR1, the first argument was of type REAL(kind=1); it is now a COMPLEX(kind=1).

    Read the article

  • Please help fix and optimize this query

    - by user607217
    I am working on a system to find potential duplicates in our customers table (SQL 2005). I am using the built-in SOUNDEX value that our software computes when customers are added/updated, but I also implemented the double metaphone algorithm for better matching. This is the most-nested query I have created, and I can't help but think there is a better way to do it; I'd like to learn. In the innermost query I am joining the customer table to the metaphone table I created, then finding customers that have identical pKey (primary phonetic key) values. I take that, union it with customers that have matching soundex values, and then proceed to score those matches with various text similarity functions. This is currently working, but I would also like to add a union of customers whose aKey (alternate phonetic key) values match. This would be identical to "QUERY A" except substituting on (c1Akey = c2Akey) for the join. However, when I attempt to include that, I get errors when I try to execute my query. Here is the code:

        --Create aggregate ranking
        select c1Name, c2Name, nDiff, c1Addr, c2Addr, aDiff, c1SSN, c2SSN, sDiff,
               c1DOB, c2DOB, dDiff, nDiff+aDiff+dDiff+sDiff as Score,
               (sDiff+dDiff)*1.5 + (nDiff+dDiff)*1.5 + (nDiff+sDiff)*1.5
                 + aDiff*.5 + nDiff*.5 as [Rank]
        FROM (
            --Create match scores for different fields
            SELECT c1Name, c2Name, c1Addr, c2Addr, c1SSN, c2SSN, c1LTD, c2LTD,
                   c1DOB, c2DOB,
                   dbo.Jaro(c1name, c2name) AS nDiff,
                   dbo.JaroWinkler(c1addr, c2addr) AS aDiff,
                   CASE WHEN c1dob = '1901-01-01' OR c2dob = '1901-01-01'
                          OR c1dob = '1800-01-01' OR c2dob = '1800-01-01' THEN .5
                        ELSE dbo.SmithWaterman(c1dob, c2dob) END AS dDiff,
                   CASE WHEN c1ssn = '000-00-0000' OR c2ssn = '000-00-0000' THEN .5
                        ELSE dbo.Jaro(c1ssn, c2ssn) END AS sDiff
            FROM
                -- Generate list of possible matches based on multiple phonetic matching fields
                (select * from
                    -- List of similar names from pKey field of ##Metaphone table
                    --QUERY A BEGIN
                    (select customers.custno as c1Custno, name as c1Name,
                            haddr as c1Addr, ssn as c1SSN, lasttripdate as c1LTD,
                            dob as c1DOB, soundex as c1Soundex,
                            pkey as c1Pkey, akey as c1Akey
                     from Customers WITH (nolock)
                     join ##Metaphone on customers.custno = ##Metaphone.custno) as c1
                 JOIN
                    (select customers.custno as c2Custno, name as c2Name,
                            haddr as c2Addr, ssn as c2SSN, lasttripdate as c2LTD,
                            dob as c2DOB, soundex as c2Soundex,
                            pkey as c2Pkey, akey as c2Akey
                     from Customers with (nolock)
                     join ##Metaphone on customers.custno = ##Metaphone.custno) as c2
                    on (c1Pkey = c2Pkey) and (c1Custno < c2Custno)
                 WHERE (c1Name <> 'PARENT, GUARDIAN') and c1soundex != c2soundex
                 --QUERY A END
                 union
                 --List of similar names from pregenerated SOUNDEX field
                 (select t1.custno, t1.name, t1.haddr, t1.ssn, t1.lasttripdate,
                         t1.dob, t1.[soundex], 0, 0,
                         t2.custno, t2.name, t2.haddr, t2.ssn, t2.lasttripdate,
                         t2.dob, t2.[soundex], 0, 0
                  from Customers t1 WITH (nolock)
                  join customers t2 with (nolock)
                    on t1.[soundex] = t2.[soundex] and t1.custno < t2.custno
                  where (t1.name <> 'PARENT, GUARDIAN'))
                ) as a
            ) as b
        where (sDiff+dDiff)*1.5 + (nDiff+dDiff)*1.5 + (nDiff+sDiff)*1.5
              + aDiff*.5 + nDiff*.5 >= 7.5
        order by [rank] desc, score desc

    Previously, I was using joins such as on c1.pkey = c2.pkey or c1.akey = c2.akey or c1.soundex = c2.soundex, but the performance was horrendous, and using unions seems to be working a lot better. Out of 103K customers, it is currently generating a list of 8.5M potential matches (based on the phonetic codes) in 2.25 minutes, and then taking another two minutes to score, rank and filter those down to about 3,000. So I am happy with the performance; I just can't help but think there is a better way to structure this, and I need help adding the extra union condition. Thanks!

    Read the article

  • Physics System ignores collision in some rare cases

    - by Gajoo
    I've been developing a simple physics engine for my game. Since the game physics is very simple, I've decided to increase accuracy a little bit. Instead of formal integration methods like Fourier or RK4, I'm directly computing the results after delta time "dt", based on the very first laws of physics:

        dx = 0.5 * a * dt^2 + v0 * dt
        dv = a * dt

    where a is acceleration and v0 is the object's previous velocity. Also, to handle collisions, I've used a method which is somewhat different from those I've seen so far. I'm detecting all the collisions in the given time frame, stepping the world forward to the nearest collision, resolving it, and again checking for possible collisions. As I said, the world consists of very simple objects, so I'm not losing any performance due to multiple collision checking. First I'm checking if the ball collides with any walls around it (which is working perfectly), and then I'm checking if it collides with the edges of the walls (yellow points in the picture). The algorithm seems to work without any problem except in some rare cases, in which the collisions with points are ignored. I've tested everything and all the variables seem to be what they should be, but after leaving the system to work for a minute or two, the ball passes through one of those points. Here is the collision portion of my code; hopefully one of you guys can give me a hint where to look for a potential bug!

        void PhysicalWorld::checkForPointCollision(Vec2 acceleration, PhysicsComponent& ball, Vec2& collisionNormal, float& collisionTime, Vec2 target)
        {
            // this function checks if there will be any collision between a circle and a point
            // ball contains information about the circle (its current velocity, position and radius)
            // collisionNormal is an output variable
            // collisionTime is also an output variable
            // target is the point I want to check for collisions
            Vec2 V = ball.mVelocity;
            Vec2 A = acceleration;
            Vec2 P = ball.mPosition - target;
            float wallWidth = mMap->getWallWidth() / (mMap->getWallWidth() + mMap->getHallWidth()) / 2;
            float r = ball.mRadius / (mMap->getWallWidth() + mMap->getHallWidth());
            // r is ball radius scaled to match actual rendered object.

            if (A.any()) // todo: I need to first correctly solve the collisions in case there is no acceleration
                return;

            if (V.any()) // if object is not moving there will be no collisions!
            {
                float D = P.x * V.y - P.y * V.x;
                float Delta = r*r*V.length2() - D*D;
                if (Delta < eps)
                    return;
                Delta = sqrt(Delta);
                float sgnvy = V.y > 0 ? 1 : (V.y < 0 ? -1 : 0);
                Vec2 c1(( D*V.y+sgnvy*V.x*Delta) / V.length2(), (-D*V.x+fabs(V.y)*Delta) / V.length2());
                Vec2 c2(( D*V.y-sgnvy*V.x*Delta) / V.length2(), (-D*V.x-fabs(V.y)*Delta) / V.length2());
                float t1 = (c1.x - P.x) / V.x;
                float t2 = (c2.x - P.x) / V.x;
                if (t1 > eps && t1 <= collisionTime)
                {
                    collisionTime = t1;
                    collisionNormal = c1;
                }
                if (t2 > eps && t2 <= collisionTime)
                {
                    collisionTime = t2;
                    collisionNormal = c2;
                }
            }
        }

        // this function should step the world forward by dt. it doesn't check for collision of any two balls (components)
        // it just checks if there is a collision between the current component and 4 points forming a rectangle around it.
        void PhysicalWorld::step(float dt)
        {
            for (unsigned i = 0; i < mObjects.size(); i++)
            {
                PhysicsComponent &current = *mObjects[i];
                Vec2 acceleration = current.mForces * current.mInvMass;
                float rt = dt; // stores how much more the world should advance
                while (rt > eps)
                {
                    float collisionTime = rt;
                    Vec2 collisionNormal = Vec2(0,0);
                    float halfWallWidth = mMap->getWallWidth() / (mMap->getWallWidth() + mMap->getHallWidth()) / 2;

                    // we check if there is any collision with any of those 4 points around the ball
                    // if there is a collision, both collisionNormal and collisionTime will change
                    // after these calls collisionTime will be exactly the time of the nearest collision (if any)
                    // and if there was one, collisionNormal will report in which direction the ball should return.
                    checkForPointCollision(acceleration, current, collisionNormal, collisionTime, Vec2(floor(current.mPosition.x) + halfWallWidth, floor(current.mPosition.y) + halfWallWidth));
                    checkForPointCollision(acceleration, current, collisionNormal, collisionTime, Vec2(floor(current.mPosition.x) + halfWallWidth, ceil(current.mPosition.y) - halfWallWidth));
                    checkForPointCollision(acceleration, current, collisionNormal, collisionTime, Vec2( ceil(current.mPosition.x) - halfWallWidth, floor(current.mPosition.y) + halfWallWidth));
                    checkForPointCollision(acceleration, current, collisionNormal, collisionTime, Vec2( ceil(current.mPosition.x) - halfWallWidth, ceil(current.mPosition.y) - halfWallWidth));

                    // whether or not there was a collision, we step forward, since we are sure there will be no collision before collisionTime
                    current.mPosition += collisionTime * (collisionTime * acceleration * 0.5 + current.mVelocity);
                    current.mVelocity += collisionTime * acceleration;

                    // if the ball collided with anything, collisionNormal should be non-zero in at least one of its axes
                    if (collisionNormal.any())
                    {
                        collisionNormal *= Dot(collisionNormal, current.mVelocity) / collisionNormal.length2();
                        current.mVelocity -= 2 * collisionNormal; // simply reverse velocity along collision normal direction
                    }
                    rt -= collisionTime;
                }
                // reset all forces for current object so it'll be ready for later game events
                current.mForces.zero();
            }
        }

    Read the article

  • Subterranean IL: The ThreadLocal type

    - by Simon Cooper
    I came across ThreadLocal<T> while I was researching ConcurrentBag. To look at it, it doesn't really make much sense. What are all those extra Cn classes doing in there? Why is there a GenericHolder<T,U,V,W> class? What's going on? However, digging deeper, it's a rather ingenious solution to a tricky problem.

    Thread statics

    Declaring that a variable is thread static, that is, that values assigned to and read from the field are specific to the thread doing the reading, is quite easy in .NET:

        [ThreadStatic]
        private static string s_ThreadStaticField;

    ThreadStaticAttribute is not a pseudo-custom attribute; it is compiled as a normal attribute, but the CLR has in-built magic, activated by that attribute, to redirect accesses to the field based on the executing thread's identity. ThreadStaticAttribute provides a simple solution when you want to use a single field as thread-static. What if you want to create an arbitrary number of thread-static variables at runtime? Thread-static fields can only be declared, and are fixed, at compile time. Prior to .NET 4, you only had one solution - thread local data slots. This is a lesser-known function of Thread that has existed since .NET 1.1:

        LocalDataStoreSlot threadSlot = Thread.AllocateNamedDataSlot("slot1");
        string value = "foo";
        Thread.SetData(threadSlot, value);
        string gettedValue = (string)Thread.GetData(threadSlot);

    Each instance of LocalDataStoreSlot mediates access to a single slot, and each slot acts like a separate thread-static field. As you can see, using thread data slots is quite cumbersome. You need to keep track of LocalDataStoreSlot objects, it's not obvious how instances of LocalDataStoreSlot correspond to individual thread-static variables, and it's not type safe. It's also relatively slow and complicated; the internal implementation consists of a whole series of classes hanging off a single thread-static field in Thread itself, using various arrays, lists, and locks for synchronization. ThreadLocal<T> is far simpler and easier to use.

    ThreadLocal

    ThreadLocal provides an abstraction around thread-static fields that allows it to be used just like any other class; it can be used as a replacement for a thread-static field, it can be used in a List<ThreadLocal<T>>, and you can create as many as you need at runtime. So what does it do? It can't just have an instance-specific thread-static field, because thread-static fields have to be declared as static, and so are shared between all instances of the declaring type. There's something else going on here. The values stored in instances of ThreadLocal<T> are stored in instantiations of the GenericHolder<T,U,V,W> class, which contains a single ThreadStatic field (s_value) to store the actual value. This class is then instantiated with various combinations of the Cn types for generic arguments. In .NET, each separate instantiation of a generic type has its own static state. For example, GenericHolder<int,C0,C1,C2> has a completely separate s_value field to GenericHolder<int,C1,C14,C1>. This feature is (ab)used by ThreadLocal to emulate instance thread-static fields. Every time an instance of ThreadLocal is constructed, it is assigned a unique number from the static s_currentTypeId field using Interlocked.Increment, in the FindNextTypeIndex method. The hexadecimal representation of that number then defines the specific Cn types that instantiate the GenericHolder class. That instantiation is therefore 'owned' by that instance of ThreadLocal.
    This gives each instance of ThreadLocal its own ThreadStatic field through a specific unique instantiation of the GenericHolder class. Although GenericHolder has four type variables, the first one is always instantiated to the type stored in the ThreadLocal<T>. This gives three free type variables, each of which can be instantiated to one of 16 types (C0 to C15). This puts an upper limit of 4096 (16^3) on the number of ThreadLocal<T> instances that can be created for each value of T. That is, there can be a maximum of 4096 instances of ThreadLocal<string>, and separately a maximum of 4096 instances of ThreadLocal<object>, etc. However, there is an upper limit of 16384 enforced on the total number of ThreadLocal instances in the AppDomain. This is to stop too much memory being used by thousands of instantiations of GenericHolder<T,U,V,W>, as once a type is loaded into an AppDomain it cannot be unloaded, and will continue to sit there taking up memory until the AppDomain is unloaded. The total number of ThreadLocal instances created is tracked by the ThreadLocalGlobalCounter class.

    So what happens when either limit is reached? Firstly, to try and stop the limit being reached, ThreadLocal recycles the GenericHolder type indexes of instances that get disposed, using the s_availableIndices concurrent stack. This allows GenericHolder instantiations of disposed ThreadLocal instances to be re-used. But if there aren't any available instantiations, then ThreadLocal falls back on a standard thread local slot using TLSHolder. This makes it very important to dispose of your ThreadLocal instances if you'll be using lots of them, so the type instantiations can be recycled.

    The previous way of creating arbitrary thread-static variables, thread data slots, was slow, clunky, and hard to use. In comparison, ThreadLocal can be used just like any other type, and each instance appears from the outside to be a non-static thread-static variable. It does this by using the CLR type system to assign each instance of ThreadLocal its own instantiated type containing a thread-static field, and so delegating a lot of the bookkeeping that thread data slots had to do to the CLR type system itself! That's a very clever use of the CLR type system.
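    The behavior ThreadLocal<T> provides (a per-instance, per-thread value) can be sketched in Python, where threading.local does the per-thread bookkeeping that the CLR's generic instantiations do above. This is an analogy for the semantics only, not the CLR mechanism:

        import threading

        class ThreadLocalVar:
            """Each instance holds an independent value per thread,
            like .NET's ThreadLocal<T>."""
            def __init__(self, factory=lambda: None):
                self._store = threading.local()  # per-instance storage object
                self._factory = factory          # lazily creates the first value

            @property
            def value(self):
                if not hasattr(self._store, "v"):
                    self._store.v = self._factory()
                return self._store.v

            @value.setter
            def value(self, v):
                self._store.v = v

        counter = ThreadLocalVar(int)  # each thread sees its own counter
        counter.value += 1             # only affects the current thread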

    Read the article

  • Take Two: Comparing JVMs on ARM/Linux

    - by user12608080
    Although the intent of the previous article, entitled Comparing JVMs on ARM/Linux, was to introduce and highlight the availability of the HotSpot server compiler (referred to as c2) for Java SE-Embedded ARM v7, it seems, based on feedback, that everyone was more interested in the OpenJDK comparisons to Java SE-E. In fact there were two main concerns:

    1. The fact that the previous article compared Java SE-E 7 against OpenJDK 6 might be construed as an unlevel playing field, because version 7 is newer and therefore potentially more optimized.
    2. That the generic compiler settings chosen to build the OpenJDK implementations did not put those versions in a particularly favorable light.

    With those considerations in mind, we'll institute the following changes to this version of the benchmarking:

    1. In order to help alleviate an additional concern that there is some sort of benchmark bias, we'll use a different suite, called DaCapo. Funded and supported by many prestigious organizations, DaCapo's aim is to benchmark real world applications. Further information about DaCapo can be found at http://dacapobench.org.
    2. At the suggestion of Xerxes Ranby, who has been a great help through this entire exercise, a newer Linux distribution will be used to assure that the OpenJDK implementations were built with more optimal compiler settings. The Linux distribution in this instance is Ubuntu 11.10 Oneiric Ocelot.
    3. Having experienced difficulties getting Ubuntu 11.10 to run on the original D2Plug ARMv7 platform, for these benchmarks we'll switch to an embedded system that has a supported Ubuntu 11.10 release. That platform is the Freescale i.MX53 Quick Start Board. It has an ARMv7 Cortex-A8 processor running at 1GHz with 1GB RAM.
    4. We'll limit comparisons to 4 JVM implementations:
       - Java SE-E 7 Update 2 c1 compiler (default)
       - Java SE-E 6 Update 30 (c1 compiler is the only option)
       - OpenJDK 6 IcedTea6 1.11pre 6b23~pre11-0ubuntu1.11.10.2 CACAO build 1.1.0pre2
       - OpenJDK 6 IcedTea6 1.11pre 6b23~pre11-0ubuntu1.11.10.2 JamVM build-1.6.0-devel

    Certain OpenJDK implementations were eliminated from this round of testing for the simple reason that their performance was not competitive. The Java SE 7u2 c2 compiler was also removed because, although quite respectable, it did not perform as well as the c1 compilers. Recall that c2 works optimally in long-lived situations, and many of these benchmarks completed in a relatively short period of time. To get a feel for where c2 shines, take a look at the first chart in this blog.

    The first chart that follows includes performance of all benchmark runs on all platforms. Later on we'll look more at individual tests. In all runs, smaller means faster. The DaCapo aficionado may notice that only 10 of the 14 DaCapo tests for this version were executed. The reason for this is that these 10 tests represent the only ones successfully completed by all 4 JVMs. Only Java SE-E 6u30 could successfully run all of the tests. Both OpenJDK instances not only failed to complete certain tests, but also experienced VM aborts too.

    One of the first observations that can be made between Java SE-E 6 and 7 is that, for all intents and purposes, they are on par with regard to performance. While it is a fact that successive Java SE releases add additional optimizations, it is also true that Java SE 7 introduces additional complexity to the Java platform, thus balancing out any potential performance gains at this point. We are still early into Java SE 7.
    We would expect further performance enhancements for Java SE-E 7 in future updates. Comparing Java SE-E to OpenJDK performance: of the two OpenJDK VMs, Cacao's results are respectable in 4 of the 10 tests. The charts that follow show the individual results of those four tests. Both Java SE-E versions win every test, outperforming Cacao in the range of 9% to 55%. For the remaining 6 tests, Java SE-E significantly outperforms Cacao, in the range of 114% to 311%. So it looks like OpenJDK results are mixed for this round of benchmarks. In some cases, performance looks to have improved. But in a majority of instances, OpenJDK still lags behind Java SE-Embedded considerably. Time to put on my asbestos suit. Let the flames begin...

    Read the article

  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
      Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments.  In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE.  In this segment, I will lay the groundwork for the modeling concepts.  First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing. Oracle BI EE Query Cycle The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources.  There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source, or, most of the time, are spread across various types of sources. Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves.  Most of them create queries without knowing it, by picking a dashboard page and some filters.  Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations. The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business focused measures and dimensional attributes that business people can use in their analyses and dashboards.   After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts. The CEIM is a model that controls the processing of the BI Server.  It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis.  It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules.  The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.     Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL.  The API simply uses ODBC or JDBC to pass the query to the BI Server.  Presentation services writes the LSQL query in terms of the simplified objects presented to the users.  The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources.  For example, the LSQL on the left below was rewritten into the physical SQL for an Oracle 11g database on the right. 
    Logical SQL:

        SELECT "D0 Time"."T02 Per Name Month" saw_0,
               "D4 Product"."P01  Product" saw_1,
               "F2 Units"."2-01  Billed Qty  (Sum All)" saw_2
        FROM "Sample Sales"
        ORDER BY saw_0, saw_1

    Physical SQL:

        WITH SAWITH0 AS (
            select T986.Per_Name_Month as c1, T879.Prod_Dsc as c2,
                   sum(T835.Units) as c3, T879.Prod_Key as c4
            from Product T879 /* A05 Product */ ,
                 Time_Mth T986 /* A08 Time Mth */ ,
                 FactsRev T835 /* A11 Revenue (Billed Time Join) */
            where ( T835.Prod_Key = T879.Prod_Key and T835.Bill_Mth = T986.Row_Wid)
            group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
        )
        select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3
        from SAWITH0
        order by c1, c2

    Probably everybody reading this blog can write SQL or MDX. However, the trick in designing the CEIM is that you are modeling a query-generation factory. Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests. This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.

    The Structure of the Common Enterprise Information Model (CEIM)

    The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result. The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below. Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request. When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries. That's the left-to-right flow in the diagram below. When the results come back from the data source queries, the right-to-left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.

    Business Model

    Think of the business model as the heart of the CEIM you are designing. This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole. It also provides the baseline business-friendly names and user-readable dictionary. For these reasons, it is often called the "logical" model - it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models. Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value, as well. This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.
    It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next. The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model. Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time. This is important for federation, where a given logical object can map to several different physical objects in different databases. It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products. Your requirements process with your user community will mostly affect the business model. This is where you will define most of the things they specifically ask for, such as metric definitions. For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer.

    Physical Model

    The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries. Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time. Each physical data source will have its own physical model, or "database" object in the CEIM. The shape of each physical model matches the shape of its physical source. In other words, if the source is normalized relational, the physical model will mimic that normalized shape. If it is a hypercube, the physical model will have a hypercube shape. If it is a flat file, it will have a denormalized tabular shape. To aid in query optimization, the physical layer also tracks the specifics of the database brand and release. This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax, and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer. For most data sources, native APIs are used to further optimize performance and functionality. The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics. This encapsulation is another enabler of packaged BI applications and federation. It is also key to hiding the complex shapes and relationships in the physical sources from the end users. Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database. The only difference from the user's point of view is that the 2nd query added a more detailed dimension level column - everything else was the same.

    Mappings

    Within the Business Model and Mapping Layer, the mappings provide the binding from each logical column and join in the dimensional business model, to each of the objects that can provide its data in the physical layer.
    When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used. These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources. These mappings are usually the most sophisticated part of the CEIM.

    Presentation

    You might think of the presentation layer as a set of very simple relational-like views into the business model. Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns. For business users, presentation services interprets these as subject areas, folders and columns, respectively. (Note that in 10g, subject areas were called presentation catalogs in the CEIM. In this blog, I will stick to 11g terminology.) Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio). The purpose of the presentation layer is to specialize the business model for different categories of users. Based on a user's role, they will be restricted to specific subject areas, tables and columns for security. The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users. Customized names and descriptions can be used to override the business model names for a specific audience. Variables in the object names can be used for localization. For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables. The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer. In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model.

    Other Model Objects

    There are some objects that apply to multiple layers. These include security-related objects, such as application roles, users, data filters, and query limits (governors). There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user session basis. Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.

    The Query Factory

    At this point, you should have a grasp on the query factory concept. When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand. The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.

    Read the article

  • Root certificate authority works windows/linux but not mac osx - (malformed)

    - by AKwhat
    I have created a self-signed root certificate authority which if I install onto windows, linux, or even using the certificate store in firefox (windows/linux/macosx) will work perfectly with my terminating proxy. I have installed it into the system keychain and I have set the certificate to always trust. Within the chrome browser details it says "The certificate that Chrome received during this connection attempt is not formatted correctly, so Chrome cannot use it to protect your information. Error type: Malformed certificate" I used this code to create the certificate: openssl genrsa -des3 -passout pass:***** -out private/server.key 4096 openssl req -batch -passin pass:***** -new -x509 -nodes -sha1 -days 3600 -key private/server.key -out server.crt -config ../openssl.cnf If the issue is NOT that it is malformed (because it works everywhere else) then what else could it be? Am I installing it incorrectly? To be clear: Within the windows/linux OS, all browsers work perfectly. Within mac only firefox works if it uses its internal certificate store and not the keychain. It's the keychain method of importing a certificate that causes the issue. Thus, all browsers using the keychain will not work. Root CA Cert: -----BEGIN CERTIFICATE----- **some base64 stuff** -----END CERTIFICATE----- Intermediate CA Cert: Certificate: Data: Version: 3 (0x2) Serial Number: 1 (0x1) Signature Algorithm: sha1WithRSAEncryption Issuer: C=*****, ST=*******, L=******, O=*******, CN=******/emailAddress=****** Validity Not Before: May 21 13:57:32 2014 GMT Not After : Jun 20 13:57:32 2014 GMT Subject: C=*****, ST=********, O=*******, CN=*******/emailAddress=******* Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (4096 bit) Modulus (4096 bit): 00:e7:2d:75:38:23:02:8e:b9:8d:2f:33:4c:2a:11: 6d:d4:f8:29:ab:f3:fc:12:00:0f:bb:34:ec:35:ed: a5:38:10:1e:f3:54:c2:69:ae:3b:22:c0:0d:00:97: 08:da:b9:c9:32:c0:c6:b1:8b:22:7e:53:ea:69:e2: 6d:0f:bd:f5:96:b2:d0:0d:b2:db:07:ba:f1:ce:53: 8a:5e:e0:22:ce:3e:36:ed:51:63:21:e7:45:ad:f9: 4d:9b:8f:7f:33:4c:ed:fc:a6:ac:16:70:f5:96:36: 37:c8:65:47:d1:d3:12:70:3e:8d:2f:fb:9f:94:e0: c9:5f:d0:8c:30:e0:04:23:38:22:e5:d9:84:15:b8: 31:e7:a7:28:51:b8:7f:01:49:fb:88:e9:6c:93:0e: 63:eb:66:2b:b4:a0:f0:31:33:8b:b4:04:84:1f:9e: d5:ed:23:cc:bf:9b:8e:be:9a:5c:03:d6:4f:1a:6f: 2d:8f:47:60:6c:89:c5:f0:06:df:ac:cb:26:f8:1a: 48:52:5e:51:a0:47:6a:30:e8:bc:88:8b:fd:bb:6b: c9:03:db:c2:46:86:c0:c5:a5:45:5b:a9:a3:61:35: 37:e9:fc:a1:7b:ae:71:3a:5c:9c:52:84:dd:b2:86: b3:2e:2e:7a:5b:e1:40:34:4a:46:f0:f8:43:26:58: 30:87:f9:c6:c9:bc:b4:73:8b:fc:08:13:33:cc:d0: b7:8a:31:e9:38:a3:a9:cc:01:e2:d4:c2:a5:c1:55: 52:72:52:2b:06:a3:36:30:0c:5c:29:1a:dd:14:93: 2b:9d:bf:ac:c1:2d:cd:3f:89:1f:bc:ad:a4:f2:bd: 81:77:a9:f4:f0:b9:50:9e:fb:f5:da:ee:4e:b7:66: e5:ab:d1:00:74:29:6f:01:28:32:ea:7d:3f:b3:d7: 97:f2:60:63:41:0f:30:6a:aa:74:f4:63:4f:26:7b: 71:ed:57:f1:d4:99:72:61:f4:69:ad:31:82:76:67: 21:e1:32:2f:e8:46:d3:28:61:b1:10:df:4c:02:e5: d3:cc:22:30:a4:bb:81:10:dc:7d:49:94:b2:02:2d: 96:7f:e5:61:fa:6b:bd:22:21:55:97:82:18:4e:b5: a0:67:2b:57:93:1c:ef:e5:d2:fb:52:79:95:13:11: 20:06:8c:fb:e7:0b:fd:96:08:eb:17:e6:5b:b5:a0: 8d:dd:22:63:99:af:ad:ce:8c:76:14:9a:31:55:d7: 95:ea:ff:10:6f:7c:9c:21:00:5e:be:df:b0:87:75: 5d:a6:87:ca:18:94:e7:6a:15:fe:27:dd:28:5e:c0: ad:d2:91:d3:2d:8e:c3:c0:9f:fb:ff:c0:36:7e:e2: d7:bc:41 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost, DNS:dropbox.com, DNS:*.dropbox.com, DNS:filedropper.com, DNS:*.filedropper.com X509v3 Subject Key Identifier: 
F3:E5:38:5B:3C:AF:1C:73:C1:4C:7D:8B:C8:A1:03:82:65:0D:FF:45 X509v3 Authority Key Identifier: keyid:2B:37:39:7B:9F:45:14:FE:F8:BC:CA:E0:6E:B4:5F:D6:1A:2B:D7:B0 DirName:/C=****/ST=******/L=*******/O=*******/CN=******/emailAddress=******* serial:EE:8C:A3:B4:40:90:B0:62 X509v3 Basic Constraints: CA:TRUE Signature Algorithm: sha1WithRSAEncryption 46:2a:2c:e0:66:e3:fa:c6:80:b6:81:e7:db:c3:29:ab:e7:1c: f0:d9:a0:b7:a9:57:8c:81:3e:30:8f:7d:ef:f7:ed:3c:5f:1e: a5:f6:ae:09:ab:5e:63:b4:f6:d6:b6:ac:1c:a0:ec:10:19:ce: dd:5a:62:06:b4:88:5a:57:26:81:8e:38:b9:0f:26:cd:d9:36: 83:52:ec:df:f4:63:ce:a1:ba:d4:1c:ec:b6:66:ed:f0:32:0e: 25:87:79:fa:95:ee:0f:a0:c6:2d:8f:e9:fb:11:de:cf:26:fa: 59:fa:bd:0b:74:76:a6:5d:41:0d:cd:35:4e:ca:80:58:2a:a8: 5d:e4:d8:cf:ef:92:8d:52:f9:f2:bf:65:50:da:a8:10:1b:5e: 50:a7:7e:57:7b:94:7f:5c:74:2e:80:ae:1e:24:5f:0b:7b:7e: 19:b6:b5:bd:9d:46:5a:e8:47:43:aa:51:b3:4b:3f:12:df:7f: ef:65:21:85:c2:f6:83:84:d0:8d:8b:d9:6d:a8:f9:11:d4:65: 7d:8f:28:22:3c:34:bb:99:4e:14:89:45:a4:62:ed:52:b1:64: 9a:fd:08:cd:ff:ca:9e:3b:51:81:33:e6:37:aa:cb:76:01:90: d1:39:6f:6a:8b:2d:f5:07:f8:f4:2a:ce:01:37:ba:4b:7f:d4: 62:d7:d6:66:b8:78:ad:0b:23:b6:2e:b0:9a:fc:0f:8c:4c:29: 86:a0:bc:33:71:e5:7f:aa:3e:0e:ca:02:e1:f6:88:f0:ff:a2: 04:5a:f5:d7:fe:7d:49:0a:d2:63:9c:24:ed:02:c7:4d:63:e6: 0c:e1:04:cd:a4:bf:a8:31:d3:10:db:b4:71:48:f7:1a:1b:d9: eb:a7:2e:26:00:38:bd:a8:96:b4:83:09:c9:3d:79:90:e1:61: 2c:fc:a0:2c:6b:7d:46:a8:d7:17:7f:ae:60:79:c1:b6:5c:f9: 3c:84:64:7b:7f:db:e9:f1:55:04:6e:b5:d3:5e:d3:e3:13:29: 3f:0b:03:f2:d7:a8:30:02:e1:12:f4:ae:61:6f:f5:4b:e9:ed: 1d:33:af:cd:9b:43:42:35:1a:d4:f6:b9:fb:bf:c9:8d:6c:30: 25:33:43:49:32:43:a5:a8:d8:82:ef:b0:a6:bd:8b:fb:b6:ed: 72:fd:9a:8f:00:3b:97:a3:35:a4:ad:26:2f:a9:7d:74:08:82: 26:71:40:f9:9b:01:14:2e:82:fb:2f:c0:11:51:00:51:07:f9: e1:f6:1f:13:6e:03:ee:d7:85:c2:64:ce:54:3f:15:d4:d7:92: 5f:87:aa:1e:b4:df:51:77:12:04:d2:a5:59:b3:26:87:79:ce: ee:be:60:4e:87:20:5c:7f -----BEGIN CERTIFICATE----- **some base64 stuff** -----END CERTIFICATE-----

    Read the article

  • Puppet's automatically generated certificates failing

    - by gparent
    I am running a default configuration of Puppet on Debian Squeeze 6.0.4. The server's FQDN is master.example.com. The client's FQDN is client.example.com. I am able to contact the puppet master and send a CSR. I sign it using puppetca -sa but the client will still not connect. Date of both machines is within 2 seconds of Tue Apr 3 20:59:00 UTC 2012 as I wrote this sentence. This is what appears in /var/log/syslog: Apr 3 17:03:52 localhost puppet-agent[18653]: Reopening log files Apr 3 17:03:52 localhost puppet-agent[18653]: Starting Puppet client version 2.6.2 Apr 3 17:03:53 localhost puppet-agent[18653]: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed Apr 3 17:03:53 localhost puppet-agent[18653]: Using cached catalog Apr 3 17:03:53 localhost puppet-agent[18653]: Could not retrieve catalog; skipping run Here is some interesting output: OpenSSL client test: client:~# openssl s_client -host master.example.com -port 8140 -cert /var/lib/puppet/ssl/certs/client.example.com.pem -key /var/lib/puppet/ssl/private_keys/client.example.com.pem -CAfile /var/lib/puppet/ssl/certs/ca.pem CONNECTED(00000003) depth=1 /CN=Puppet CA: master.example.com verify return:1 depth=0 /CN=master.example.com verify error:num=7:certificate signature failure verify return:1 depth=0 /CN=master.example.com verify return:1 18509:error:1409441B:SSL routines:SSL3_READ_BYTES:tlsv1 alert decrypt error:s3_pkt.c:1102:SSL alert number 51 18509:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:188: client:~# master's certificate: root@master:/etc/puppet# openssl x509 -text -noout -in /etc/puppet/ssl/certs/master.example.com.pem Certificate: Data: Version: 3 (0x2) Serial Number: 2 (0x2) Signature Algorithm: sha1WithRSAEncryption Issuer: CN=Puppet CA: master.example.com Validity Not Before: Apr 2 20:01:28 2012 GMT Not After : Apr 2 20:01:28 2017 GMT Subject: CN=master.example.com Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:a9:c1:f9:4c:cd:0f:68:84:7b:f4:93:16:20:44: 7a:2b:05:8e:57:31:05:8e:9c:c8:08:68:73:71:39: c1:86:6a:59:93:6e:53:aa:43:11:83:5b:2d:8c:7d: 54:05:65:c1:e1:0e:94:4a:f0:86:58:c3:3d:4f:f3: 7d:bd:8e:29:58:a6:36:f4:3e:b2:61:ec:53:b5:38: 8e:84:ac:5f:a3:e3:8c:39:bd:cf:4f:3c:ff:a9:65: 09:66:3c:ba:10:14:69:d5:07:57:06:28:02:37:be: 03:82:fb:90:8b:7d:b3:a5:33:7b:9b:3a:42:51:12: b3:ac:dd:d5:58:69:a9:8a:ed Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE Netscape Comment: Puppet Ruby/OpenSSL Internal Certificate X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Subject Key Identifier: 8C:2F:14:84:B6:A1:B5:0C:11:52:36:AB:E5:3F:F2:B9:B3:25:F3:1C X509v3 Extended Key Usage: critical TLS Web Server Authentication, TLS Web Client Authentication Signature Algorithm: sha1WithRSAEncryption 7b:2c:4f:c2:76:38:ab:03:7f:c6:54:d9:78:1d:ab:6c:45:ab: 47:02:c7:fd:45:4e:ab:b5:b6:d9:a7:df:44:72:55:0c:a5:d0: 86:58:14:ae:5f:6f:ea:87:4d:78:e4:39:4d:20:7e:3d:6d:e9: e2:5e:d7:c9:3c:27:43:a4:29:44:85:a1:63:df:2f:55:a9:6a: 72:46:d8:fb:c7:cc:ca:43:e7:e1:2c:fe:55:2a:0d:17:76:d4: e5:49:8b:85:9f:fa:0e:f6:cc:e8:28:3e:8b:47:b0:e1:02:f0: 3d:73:3e:99:65:3b:91:32:c5:ce:e4:86:21:b2:e0:b4:15:b5: 22:63 root@master:/etc/puppet# CA's certificate: root@master:/etc/puppet# openssl x509 -text -noout -in /etc/puppet/ssl/certs/ca.pem Certificate: Data: Version: 3 (0x2) Serial Number: 1 (0x1) Signature Algorithm: sha1WithRSAEncryption 
Issuer: CN=Puppet CA: master.example.com Validity Not Before: Apr 2 20:01:05 2012 GMT Not After : Apr 2 20:01:05 2017 GMT Subject: CN=Puppet CA: master.example.com Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:b5:2c:3e:26:a3:ae:43:b8:ed:1e:ef:4d:a1:1e: 82:77:78:c2:98:3f:e2:e0:05:57:f0:8d:80:09:36: 62:be:6c:1a:21:43:59:1d:e9:b9:4d:e0:9c:fa:09: aa:12:a1:82:58:fc:47:31:ed:ad:ad:73:01:26:97: ef:d2:d6:41:6b:85:3b:af:70:00:b9:63:e9:1b:c3: ce:57:6d:95:0e:a6:d2:64:bd:1f:2c:1f:5c:26:8e: 02:fd:d3:28:9e:e9:8f:bc:46:bb:dd:25:db:39:57: 81:ed:e5:c8:1f:3d:ca:39:cf:e7:f3:63:75:f6:15: 1f:d4:71:56:ed:84:50:fb:5d Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:TRUE Netscape Comment: Puppet Ruby/OpenSSL Internal Certificate X509v3 Key Usage: critical Certificate Sign, CRL Sign X509v3 Subject Key Identifier: 8C:2F:14:84:B6:A1:B5:0C:11:52:36:AB:E5:3F:F2:B9:B3:25:F3:1C Signature Algorithm: sha1WithRSAEncryption 1d:cd:c6:65:32:42:a5:01:62:46:87:10:da:74:7e:8b:c8:c9: 86:32:9e:c2:2e:c1:fd:00:79:f0:ef:d8:73:dd:7e:1b:1a:3f: cc:64:da:a3:38:ad:49:4e:c8:4d:e3:09:ba:bc:66:f2:6f:63: 9a:48:19:2d:27:5b:1d:2a:69:bf:4f:f4:e0:67:5e:66:84:30: e5:85:f4:49:6e:d0:92:ae:66:77:50:cf:45:c0:29:b2:64:87: 12:09:d3:10:4d:91:b6:f3:63:c4:26:b3:fa:94:2b:96:18:1f: 9b:a9:53:74:de:9c:73:a4:3a:8d:bf:fa:9c:c0:42:9d:78:49: 4d:70 root@master:/etc/puppet# Client's certificate: client:~# openssl x509 -text -noout -in /var/lib/puppet/ssl/certs/client.example.com.pem Certificate: Data: Version: 3 (0x2) Serial Number: 3 (0x3) Signature Algorithm: sha1WithRSAEncryption Issuer: CN=Puppet CA: master.example.com Validity Not Before: Apr 2 20:01:36 2012 GMT Not After : Apr 2 20:01:36 2017 GMT Subject: CN=client.example.com Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:ae:88:6d:9b:e3:b1:fc:47:07:d6:bf:ea:53:d1: 14:14:9b:35:e6:70:43:e0:58:35:76:ac:c5:9d:86: 02:fd:77:28:fc:93:34:65:9d:dd:0b:ea:21:14:4d: 8a:95:2e:28:c9:a5:8d:a2:2c:0e:1c:a0:4c:fa:03: e5:aa:d3:97:98:05:59:3c:82:a9:7c:0e:e9:df:fd: 48:81:dc:33:dc:88:e9:09:e4:19:d6:e4:7b:92:33: 31:73:e4:f2:9c:42:75:b2:e1:9f:d9:49:8c:a7:eb: fa:7d:cb:62:22:90:1c:37:3a:40:95:a7:a0:3b:ad: 8e:12:7c:6e:ad:04:94:ed:47 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE Netscape Comment: Puppet Ruby/OpenSSL Internal Certificate X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Subject Key Identifier: 8C:2F:14:84:B6:A1:B5:0C:11:52:36:AB:E5:3F:F2:B9:B3:25:F3:1C X509v3 Extended Key Usage: critical TLS Web Server Authentication, TLS Web Client Authentication Signature Algorithm: sha1WithRSAEncryption 33:1f:ec:3c:91:5a:eb:c6:03:5f:a1:58:60:c3:41:ed:1f:fe: cb:b2:40:11:63:4d:ba:18:8a:8b:62:ba:ab:61:f5:a0:6c:0e: 8a:20:56:7b:10:a1:f9:1d:51:49:af:70:3a:05:f9:27:4a:25: d4:e6:88:26:f7:26:e0:20:30:2a:20:1d:c4:d3:26:f1:99:cf: 47:2e:73:90:bd:9c:88:bf:67:9e:dd:7c:0e:3a:86:6b:0b:8d: 39:0f:db:66:c0:b6:20:c3:34:84:0e:d8:3b:fc:1c:a8:6c:6c: b1:19:76:65:e6:22:3c:bf:ff:1c:74:bb:62:a0:46:02:95:fa: 83:41 client:~#

    Read the article

  • Linq: Converting flat structure to hierarchical

    - by Stefan Steinegger
    What is the easiest and somewhat efficient way to convert a flat structure:

        object[][] rawData = new object[][]
        {
            { "A1", "B1", "C1" },
            { "A1", "B1", "C2" },
            { "A2", "B2", "C3" },
            { "A2", "B2", "C4" }
            // .. more
        };

    into a hierarchical structure:

        class X
        {
            public X()
            {
                Cs = new List<string>();
            }
            public string A { get; set; }
            public string B { get; set; }
            public List<string> Cs { get; private set; }
        }

    The result should look like this:

        // pseudo code which describes structure:
        result =
        {
            new X() { A = "A1", B = "B1", Cs = { "C1", "C2" } },
            new X() { A = "A2", B = "B2", Cs = { "C3", "C4" } }
        }

    Preferably using Linq extension methods. The target class X could be changed (e.g. given a public setter for the List), but only if it is not possible or useful as it is now.
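    For comparison, the grouping shape the asker wants (group by the (A, B) pair, collect the Cs) written as a quick Python sketch; the C# equivalent would hang the same logic off Enumerable.GroupBy:

        from itertools import groupby

        raw = [("A1", "B1", "C1"), ("A1", "B1", "C2"),
               ("A2", "B2", "C3"), ("A2", "B2", "C4")]

        # groupby needs its input sorted by the same key it groups on
        result = [{"A": a, "B": b, "Cs": [r[2] for r in rows]}
                  for (a, b), rows in groupby(sorted(raw), key=lambda r: (r[0], r[1]))]
        # [{'A': 'A1', 'B': 'B1', 'Cs': ['C1', 'C2']},
        #  {'A': 'A2', 'B': 'B2', 'Cs': ['C3', 'C4']}]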

    Read the article

  • Error: (subscript) logical subscript too long

    - by frespider
    Can someone let me know why I am getting this error and how I can fix it? What I am trying to do is remove the rows that have a 1 in any column whose column sum is less than 10. Here is the code:

        a0 = rep(1, 40)
        a  = rep(0:1, 20)
        b  = c(rep(1, 20), rep(0, 20))
        c0 = c(rep(0, 12), rep(1, 28))
        c1 = c(rep(1, 5), rep(0, 35))
        c2 = c(rep(1, 8), rep(0, 32))
        c3 = c(rep(1, 23), rep(0, 17))
        c4 = c(rep(1, 6), rep(0, 34))
        x  = matrix(cbind(a0, a, b, c0, c1, c2, c3, c4), nrow = 40, ncol = 8)
        nam <- paste("V", 2:9, sep = "")
        colnames(x) <- nam
        dat <- cbind(y = rnorm(40, 50, 7), x)
        #===================================
        toSum <- colSums(dat)
        Col <- Val <- NULL
        for (i in 1:length(toSum)) {
            if (toSum[i] < 10) {
                Col <- c(Col, colnames(dat)[i])
                Val <- c(Val, toSum[i])
            }
        }
        cs <- colSums(dat) < 10
        indx <- dat[, which(cs)] == 0
        for (i in 1:dim(indx)[2]) {
            datnw <- dat[indx[, i], ]
            dat <- datnw
        }
        datnw2 <- dat[, -which(cs)]

    Thanks
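    The intended filtering is easier to see in a sketch, here in pandas (an illustration of the logic only, not the asker's R environment; `dat` is assumed to already be a DataFrame):

        import pandas as pd

        low = dat.columns[dat.sum() < 10]       # columns whose sum is under 10
        dat = dat[(dat[low] == 0).all(axis=1)]  # keep only rows with 0 in all such columns
        dat = dat.drop(columns=low)             # then drop those columns themselves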

    Read the article

  • MySQL query using JDBC

    - by S.PRATHIBA
    Hi all, I have the following table:

        Service_ID   feedback
        31           1
        32           1
        33           1
                     1

    I have the sample code to find the sum:

        ResultSet res = st.executeQuery("SELECT Service_ID, SUM(consumer_feedback) FROM consumer5 group by Service_ID");
        while (res.next()) {
            int data = res.getInt(1);
            System.out.println(data);
            System.out.println("\n\n");
            int c1 = res.getInt(2);
            e[m] = res.getInt(2);
            System.out.println("\n \n m is " + m + " e[m] is " + e[m]);
            if (e[m] < 0)
                e[m] = 0;
            m++;
            System.out.print(c1);
            System.out.println("\t\t");
        }

    I have to get the output as:

        31 1
        32 1
        33 1

    I am getting it. But for my project I have services 34 and 35 also. I should get the output as:

        31 1
        32 1
        33 1
        34 0
        35 0

    Read the article

  • Create a grid in WPF as Template programmatically

    - by wickie79
    I want to create a basic user control with a style programmatically. In this style I want to add a Grid (no problem), but I cannot add column definitions to this grid. My example code is:

        ControlTemplate templ = new ControlTemplate();
        FrameworkElementFactory mainPanel = new FrameworkElementFactory(typeof(DockPanel));
        mainPanel.SetValue(DockPanel.LastChildFillProperty, true);

        FrameworkElementFactory headerPanel = new FrameworkElementFactory(typeof(StackPanel));
        headerPanel.SetValue(StackPanel.OrientationProperty, Orientation.Horizontal);
        headerPanel.SetValue(DockPanel.DockProperty, Dock.Top);
        mainPanel.AppendChild(headerPanel);

        FrameworkElementFactory headerImg = new FrameworkElementFactory(typeof(Image));
        headerImg.SetValue(Image.MarginProperty, new Thickness(5));
        headerImg.SetValue(Image.HeightProperty, 32d);
        headerImg.SetBinding(Image.SourceProperty, new Binding("ElementImage") { RelativeSource = new RelativeSource(RelativeSourceMode.TemplatedParent) });
        headerPanel.AppendChild(headerImg);

        FrameworkElementFactory headerTitle = new FrameworkElementFactory(typeof(TextBlock));
        headerTitle.SetValue(TextBlock.FontSizeProperty, 16d);
        headerTitle.SetValue(TextBlock.VerticalAlignmentProperty, VerticalAlignment.Center);
        headerTitle.SetBinding(TextBlock.TextProperty, new Binding("Title") { RelativeSource = new RelativeSource(RelativeSourceMode.TemplatedParent) });
        headerPanel.AppendChild(headerTitle);

        FrameworkElementFactory mainGrid = new FrameworkElementFactory(typeof(Grid));
        FrameworkElementFactory c1 = new FrameworkElementFactory(typeof(ColumnDefinition));
        c1.SetValue(ColumnDefinition.WidthProperty, new GridLength(1, GridUnitType.Star));
        FrameworkElementFactory c2 = new FrameworkElementFactory(typeof(ColumnDefinition));
        c2.SetValue(ColumnDefinition.WidthProperty, new GridLength(1, GridUnitType.Auto));
        FrameworkElementFactory c3 = new FrameworkElementFactory(typeof(ColumnDefinition));
        c3.SetValue(ColumnDefinition.WidthProperty, new GridLength(3, GridUnitType.Star));

        FrameworkElementFactory colDefinitions = new FrameworkElementFactory(typeof(ColumnDefinitionCollection));
        colDefinitions.AppendChild(c1);
        colDefinitions.AppendChild(c2);
        colDefinitions.AppendChild(c3);
        mainGrid.AppendChild(colDefinitions);
        mainPanel.AppendChild(mainGrid);

        FrameworkElementFactory content = new FrameworkElementFactory(typeof(ContentPresenter));
        content.SetBinding(ContentPresenter.ContentProperty, new Binding() { RelativeSource = new RelativeSource(RelativeSourceMode.TemplatedParent), Path = new PropertyPath("Content") });
        mainGrid.AppendChild(content);

        templ.VisualTree = mainPanel;
        Style mainStyle = new Style();
        mainStyle.Setters.Add(new Setter(UserControl.TemplateProperty, templ));
        this.Style = mainStyle;

    But the creation of the FrameworkElementFactory with type ColumnDefinitionCollection throws the exception "'ColumnDefinitionCollection' type must derive from FrameworkElement, FrameworkContentElement, or Visual3D." Who can help me?

    Read the article

  • Zlib compression in boost::iostreams not compatible with zlib.NET

    - by Johan
    Hello, I want to send compressed data in ZLIB format from my C# application to a C++ application. In C++ I use the zlib_compressor/zlib_decompressor available in boost::iostreams. In C# I am currently using the ZOutputStream available in the zlib.NET library. First of all, when I compress the same data using both libraries, the results look different:

        boost::iostreams::zlib_compressor:
        FF 13 49 48 00 00 01 00 01 00 00 00 63 61 60 60 F8 00 C4 C1 25 45 99 79 E9 23 87 04 00

        zlib.NET (zlib.ZOutputStream):
        FF 13 49 48 00 00 01 00 01 00 00 00 78 9C 63 61 60 60 F8 00 C4 C1 25 45 99 79 E9 23 87 04 00 4F 31 63 8D

    (Note the 78 9C pattern that is present in zlib.NET but not in boost.) Furthermore, when I decompress data in boost that was compressed with zlib.NET, I am not able to read from the stream, suggesting something is wrong. It does work when I decompress data compressed in boost. Does anybody know what is going wrong? Thank you, Johan
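
    For context on those byte dumps: 78 9C is the standard two-byte zlib stream header for default compression, and the four extra trailing bytes in the zlib.NET output look like the Adler-32 checksum; in other words, one side appears to be emitting a raw deflate stream and the other a full zlib-wrapped stream (boost's zlib_params has a noheader setting that controls exactly this, if memory serves). The same header/no-header distinction can be demonstrated with Java's java.util.zip.Deflater, used here purely as a neutral illustration:

        import java.util.zip.Deflater;

        public class ZlibHeaderDemo {
            static String hex(byte[] buf, int len) {
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < len; i++) {
                    sb.append(String.format("%02X ", buf[i] & 0xFF));
                }
                return sb.toString();
            }

            static void compress(byte[] input, boolean nowrap) {
                // nowrap = true  -> raw deflate stream (no 78 9C header, no Adler-32 trailer)
                // nowrap = false -> full zlib stream (header + checksum), like zlib.NET above
                Deflater d = new Deflater(Deflater.DEFAULT_COMPRESSION, nowrap);
                d.setInput(input);
                d.finish();
                byte[] out = new byte[256];
                int n = d.deflate(out);
                d.end();
                System.out.println((nowrap ? "raw deflate: " : "zlib stream: ") + hex(out, n));
            }

            public static void main(String[] args) {
                byte[] data = "some test data".getBytes();
                compress(data, true);   // no 78 9C prefix, no trailing checksum
                compress(data, false);  // starts with 78 9C, ends with Adler-32
            }
        }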

    Read the article

  • How to filter rows in JTable based on boolean valued columns?

    - by vinny
    I'm trying to filter rows based on a column, say c1, that contains boolean values; I want to show only the rows that have 'true' in c1. I looked at the examples in http://java.sun.com/docs/books/tutorial/uiswing/components/table.html#sorting, but the example uses a regex filter. Is there any way I can use boolean values to filter rows? The following is the code I'm using (borrowed from the example):

        private void filter(boolean show) {
            RowFilter<TableModel, Object> filter = null;
            TableModel model = jTb.getModel();
            boolean value = (Boolean) model.getValueAt(0, 1);
            // If the current expression doesn't parse, don't update.
            try {
                // I need to use 'value' to filter instead of filterText.
                filter = RowFilter.regexFilter(filterText, 0);
            } catch (java.util.regex.PatternSyntaxException e) {
                return;
            }
            sorter.setRowFilter(filter);
        }

    Thank you.
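
    A regex isn't needed for a Boolean column: RowFilter can be subclassed directly, and its include() method can test the cell value itself. A minimal sketch, assuming c1 is model column index 0 (adjust BOOL_COLUMN to the real index) and reusing the sorter from the code above:

        import javax.swing.RowFilter;
        import javax.swing.table.TableModel;

        // ...inside the class that holds the table and its sorter:
        private static final int BOOL_COLUMN = 0;  // assumed model index of c1

        private void filterTrueRows() {
            RowFilter<TableModel, Object> filter = new RowFilter<TableModel, Object>() {
                @Override
                public boolean include(Entry<? extends TableModel, ? extends Object> entry) {
                    // Keep only rows whose c1 cell holds Boolean.TRUE.
                    return Boolean.TRUE.equals(entry.getValue(BOOL_COLUMN));
                }
            };
            sorter.setRowFilter(filter);
        }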

    Read the article

  • HttpWebRequest Cookie weirdness

    - by Lachman
    I'm sure I must be doing something wrong, but I can't for the life of me figure out what is going on. I have a problem where it seems that the HttpWebRequest class in the framework is not correctly parsing the cookies from a web response. I'm using Fiddler to see what is going on, and after making a request the headers of the response look like this:

        HTTP/1.1 200 Ok
        Connection: close
        Date: Wed, 14 Jan 2009 18:20:31 GMT
        Server: Microsoft-IIS/6.0
        P3P: policyref="/w3c/p3p.xml", CP="CAO DSP IND COR ADM CONo CUR CUSi DEV PSA PSD DELi OUR COM NAV PHY ONL PUR UNI"
        Set-Cookie: user=v.5,0,EX01E508801E$97$2E401000t$1BV6$A1$EC$104$A1$EC$104$A1$EC$104$21O001000$1E31!90$7CP$AE$3F$F3$D8$19o$BC$1Cd$23; Domain=.thedomain.com; path=/
        Set-Cookie: minfo=v.4,EX019ECD28D6k$A3$CA$0C$CE$A2$D6$AD$D4!2$8A$EF$E8n$91$96$E1$D7$C8$0F$98$AA$ED$DC$40V$AB$9C$C1$9CF$C9$C1zIF$3A$93$C6$A7$DF$A1$7E$A7$A1$A8$BD$A6$94c$D5$E8$2F$F4$AF$A2$DF$80$89$BA$BBd$F6$2C$B6$A8; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Set-Cookie: accttype=v.2,3,1,EX017E651B09k$A3$CA$0C$DB$A2$CB$AD$D9$8A$8C$EF$E8t$91$90$E1$DC$C89$98$AA$E0$DC$40O$A8$A4$C1$9C; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Set-Cookie: tpid=v.1,20001; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Set-Cookie: MC1=GUID=541977e04a341a2a4f4cdaaf49615487; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Set-Cookie: linfo=v.4,EQC|0|0|255|1|0||||||||0|0|0||0|0|0|-1|-1; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Set-Cookie: group=v.1,0; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Content-Type: text/html

    But when I look at response.Cookies, I see far more cookies than I am expecting, with the values of individual cookies split up into separate cookies. Getting the headers manually results in more weirdness. For example, the code

        foreach (string cookie in response.Headers.GetValues("Set-Cookie"))
        {
            Console.WriteLine("Cookie found: " + cookie);
        }

    produces the output:

        Cookie found: user=v.5
        Cookie found: 0
        Cookie found: EX01E508801E$97$2E401000t$1BV6$A1$EC$104$A1$EC$104$A1$EC$104$21O001000$1E31!90$7CP$AE$3F$F3$D8$19o$BC$1Cd$23; Domain=.thedomain.com; path=/
        Cookie found: minfo=v.4
        Cookie found: EX019ECD28D6k$A3$CA$0C$CE$A2$D6$AD$D4!2$8A$EF$E8n$91$96$E1$D7$C8$0F$98$AA$ED$DC$40V$AB$9C$C1$9CF$C9$C1zIF$3A$93$C6$A7$DF$A1$7E$A7$A1$A8$BD$A6$94c$D5$E8$2F$F4$AF$A2$DF$80$89$BA$BBd$F6$2C$B6$A8; expires=Sunday
        Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Cookie found: accttype=v.2
        Cookie found: 3
        Cookie found: 1
        Cookie found: EX017E651B09k$A3$CA$0C$DB$A2$CB$AD$D9$8A$8C$EF$E8t$91$90$E1$DC$C89$98$AA$E0$DC$40O$A8$A4$C1$9C; expires=Sunday
        Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Cookie found: tpid=v.1
        Cookie found: 20001; expires=Sunday
        Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Cookie found: MC1=GUID=541977e04a341a2a4f4cdaaf49615487; expires=Sunday
        Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Cookie found: linfo=v.4
        Cookie found: EQC|0|0|255|1|0||||||||0|0|0||0|0|0|-1|-1; expires=Sunday
        Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Cookie found: group=v.1
        Cookie found: 0; expires=Sunday
        Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/

    As you can see, the first cookie in the raw response, Set-Cookie: user=v.5,0,EX01E508801..., is getting split into:

        Cookie found: user=v.5
        Cookie found: 0
        Cookie found: EX01E508801E$...

    So, what's going on here? Am I wrong? Is the HttpWebRequest class incorrectly parsing the HTTP headers? Is the webserver producing invalid HTTP headers?

    Read the article

  • Can you help me with my Perl homework?

    - by riya
    Could someone write simple Perl programs for the following scenarios?

    1. Convert a list from {1,2,3,4,5,7,9,10,11,12,34} to {1-5,7,9-12,34} (see the sketch after this list).
    2. Sort a list of negative numbers.
    3. Insert values into a hash: there is a file with the content

           c1 c2 c3 c4
           r1 r2 r3 r4

       Put it into a hash where keys = {c1,c2,c3,c4} and values = {r1,r2,r3,r4}.
    4. There are test cases running; each test case runs as a process and has a process ID. The logs are written to a logfile with the process ID appended to each line. Write a program to find out whether a test case has passed or failed. The program should keep running while the processes are running and display its output.
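
    The question asks for Perl; purely as a language-neutral sketch of the logic behind the first item (collapsing consecutive runs into a-b ranges), here is one way to do it, shown in Java:

        public class Ranges {
            // Collapse sorted integers into "a-b" runs, e.g. 1,2,3,5 -> "1-3,5".
            static String collapse(int[] xs) {
                StringBuilder sb = new StringBuilder();
                int i = 0;
                while (i < xs.length) {
                    int j = i;
                    // Extend j while the next value continues the consecutive run.
                    while (j + 1 < xs.length && xs[j + 1] == xs[j] + 1) j++;
                    if (sb.length() > 0) sb.append(',');
                    sb.append(xs[i]);
                    if (j > i) sb.append('-').append(xs[j]);
                    i = j + 1;
                }
                return sb.toString();
            }

            public static void main(String[] args) {
                System.out.println(collapse(new int[]{1, 2, 3, 4, 5, 7, 9, 10, 11, 12, 34}));
                // prints: 1-5,7,9-12,34
            }
        }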

    Read the article

  • Git - tidying up a repo

    - by Simon Woods
    Hi, I have got my repo into a bit of a state and want to work my way out of it. The repo looks a bit like this (A1, B1, C1 etc. are obviously commits):

        A1 ---- A2 ---- A3 ---- A4 ---- A5 ---- A6 ---- A7 ---- A8
              /   (from a remote repo)
        B1 ---- B2 ---------------------------------
         |  \                                    \
         |   C1 --------------------------------- C2
          \                                      /
           D1 --- D2 --- D3 --- D4 --- D5 --- D6

    Ideally I'd like to be able to remove all the revisions (with rebase?) on the B, C and D lines (I'm loath to say branches, simply because there are now no local branches on these lines except ref branches to the remote repo) and try to merge in the remote repo again, perhaps in a better way. I'd be grateful for any suggestions on how to get rid of all these commits. Could I ask that any answers use revision SHA1s rather than branch names? I thought that somehow I'd be able to revert the merge into A7 but can't quite work out how to do it. I hope that is sufficient information. Many thx, Simon

    Read the article

  • Proving that a function f(n) belongs to a Big-Theta(g(n))

    - by PLS
    It's an exercise that asks you to indicate the class Θ(g(n)) each function belongs to and to prove the assertion. In this case f(n) = (n^2+1)^10. By definition, f(n) ∈ Θ(g(n)) iff there exist positive constants c1 and c2 (and some n0) such that c1*g(n) <= f(n) <= c2*g(n) for all n >= n0. I know that for this specific f(n) the class is Θ(n^20), but I don't know how to prove it properly. I guess I need to manipulate this inequality, but I don't know how.
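
    For reference, here is one way to make the sandwich concrete (the constants are a convenient choice, not the only one), written out as a short derivation:

        \[
        n^{20} = (n^2)^{10} \le (n^2+1)^{10} \le (n^2+n^2)^{10} = 2^{10}\,n^{20}
        \qquad \text{for all } n \ge 1,
        \]
        so the definition is satisfied with $c_1 = 1$, $c_2 = 2^{10} = 1024$, and $n_0 = 1$,
        giving $(n^2+1)^{10} \in \Theta(n^{20})$.

    The middle inequality uses $1 \le n^2$ for $n \ge 1$; raising both sides of $n^2+1 \le 2n^2$ to the 10th power preserves the inequality because both sides are positive.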

    Read the article

  • Getting ID of an instance newly launched with ec2-api-tools

    - by Jonik
    I'm launching an EC2 instance by invoking ec2-run-instances from a simple bash script, and I want to perform further operations on that instance (e.g. associate an Elastic IP), for which I need the instance ID. The command is something like ec2-run-instances ami-dd8ea5a9 -K pk.pem -C cert.pem --region eu-west-1 -t c1.medium -n 1, and its output:

        RESERVATION r-b6ea58c1 696664755663 default
        INSTANCE i-945af9e3 ami-dd8ea5b9 pending 0 c1.medium 2010-04-15T10:47:56+0000 eu-west-1a aki-b02a01c4 ari-39c2e94d

    In this example, i-945af9e3 is the ID I'm after. So I'd need a simple way to parse the ID from what the command returns; how would you go about doing it? My AWK is a little rusty... Feel free to use any tool available on a typical Linux box. (If there's a way to get it directly using the EC2 API tools, all the better. But AFAIK there's no EC2 command to, e.g., return the ID of the most recently launched instance.)

    Read the article

  • B-V to Kelvin formula

    - by PeanutPower
    Whilst looking for a "B-V color index to temperature conversion formula" I found this JavaScript:

        var C1 = 3.979145;
        var C2 = -0.654499;
        var C3 = 1.74069;
        var C4 = -4.608815;
        var C5 = 6.7926;
        var C6 = -5.39691;
        var C7 = 2.19297;
        var C8 = -.359496;
        bmv = parseFloat(BV);
        with (Math) {
            logt = C1 + C2*bmv + C3*pow(bmv,2) + C4*pow(bmv,3) + C5*pow(bmv,4)
                 + C6*pow(bmv,5) + C7*pow(bmv,6) + C8*pow(bmv,7);
            t = pow(10, logt);
        }

    It is supposed to convert a B-V color index to temperature. Does anyone understand how this works, and whether the output value is an approximation of temperature in Celsius or kelvin? Is it something to do with products of logarithms?
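
    To the extent this matches the usual photometric fits, the script evaluates log10 of the temperature as a 7th-degree polynomial in B-V and then raises 10 to that power; fits of this shape conventionally give effective temperature in kelvin, not Celsius. A hedged Java port of the same arithmetic (coefficients copied verbatim from the post; the solar sanity check in main is my own assumption):

        public class BvToTemperature {
            // Coefficients copied verbatim from the JavaScript in the question,
            // C[0] being the constant term and C[7] the (B-V)^7 coefficient.
            private static final double[] C = {
                3.979145, -0.654499, 1.74069, -4.608815,
                6.7926, -5.39691, 2.19297, -0.359496
            };

            // Evaluate log10(T) as a polynomial in (B-V) via Horner's rule,
            // then undo the logarithm.
            static double kelvinFromBV(double bv) {
                double logT = 0.0;
                for (int i = C.length - 1; i >= 0; i--) {
                    logT = logT * bv + C[i];
                }
                return Math.pow(10.0, logT);
            }

            public static void main(String[] args) {
                // The Sun's B-V is roughly 0.65; the fit lands near 5700 K,
                // consistent with the output being in kelvin.
                System.out.println(kelvinFromBV(0.65));
            }
        }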

    Read the article

  • Oracle 10g - Best way to escape single quotes

    - by satynos
    I have to generate some update statements based on a table in our database. I created the following script, which generates the update statements I need. But when I try to run those generated statements I get errors caused by unescaped single quotes in the content, and by &B and &T sequences, which have special meaning in Oracle. I took care of the &B and &T problem with SET DEFINE OFF. What's the best way to escape the single quotes?

        DECLARE
          CURSOR C1 IS SELECT * FROM EMPLOYEES;
        BEGIN
          FOR I IN C1 LOOP
            DBMS_OUTPUT.PUT_LINE('UPDATE EMPLOYEES SET FIRST_NAME = ''' || I.FIRST_NAME ||
                                 ''', LAST_NAME = ''' || I.LAST_NAME ||
                                 ''', DOB = ''' || I.DOB ||
                                 ''' WHERE EMPLOYEE_ID = ''' || I.EMPLOYEE_ID || ''';');
          END LOOP;
        END;

    Here, if FIRST_NAME or LAST_NAME contains single quotes, the generated update statements break. What's the best way to escape those single quotes within FIRST_NAME and LAST_NAME?
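
    In SQL, a single quote inside a string literal is escaped by doubling it (O'Brien becomes 'O''Brien'), so the usual fix when generating statement text is to wrap each interpolated value in REPLACE(value, '''', '''''') inside the PUT_LINE call. A small Java illustration of the same doubling rule, in case it helps to see the transformation outside PL/SQL (the sample name and ID are made up):

        public class SqlQuoteEscape {
            // SQL string literals escape a single quote by doubling it:
            // O'Brien -> O''Brien.
            static String escapeSqlLiteral(String s) {
                return s.replace("'", "''");
            }

            public static void main(String[] args) {
                String lastName = "O'Brien";
                System.out.println("UPDATE EMPLOYEES SET LAST_NAME = '"
                        + escapeSqlLiteral(lastName) + "' WHERE EMPLOYEE_ID = '42';");
                // prints: UPDATE EMPLOYEES SET LAST_NAME = 'O''Brien' WHERE EMPLOYEE_ID = '42';
            }
        }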

    Read the article
