Search Results

Search found 7580 results on 304 pages for 'coordinate systems'.

Page 40/304 | < Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >

  • Find the new coordinates using a starting point, a distance, and an angle

    - by dqhendricks
    Okay, say I have a point coordinate: var coordinate = { x: 10, y: 20 }; Now I also have a distance and an angle: var distance = 20; var angle = 72; The problem I am trying to solve is: if I want to travel 20 points in the direction of angle from the starting coordinate, how can I find what my new coordinates will be? I know the answer involves things like sin/cosine, because I used to know how to do this, but I have since forgotten the formula. Can anyone help?
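    A minimal sketch of the trigonometry involved, in Python for illustration (it assumes the angle is in degrees, measured counter-clockwise from the positive x-axis; in screen coordinates where y grows downward the y term is usually subtracted instead):

      import math

      def move_point(x, y, distance, angle_degrees):
          # Travel `distance` units from (x, y) at `angle_degrees`, measured
          # counter-clockwise from the positive x-axis.
          radians = math.radians(angle_degrees)
          return (x + distance * math.cos(radians),
                  y + distance * math.sin(radians))

      # Values from the question: start at (10, 20), travel 20 units at 72 degrees.
      new_x, new_y = move_point(10, 20, 20, 72)
      print(new_x, new_y)  # roughly (16.18, 39.02)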

    Read the article

  • implementing a high-level function in a script to call a low-level function in the game engine

    - by eat_a_lemon
    In my 2D game engine I have a function that does pathfinding, find_shortest_path. It executes for each time step in the game loop and calculates the next coordinate pair in the series of coordinates to reach the destination object. Now I want to call this function from a scripting language and have it return only the last coordinate pair. I want the game engine to go about the business of rendering the incremental steps, but I don't want the high-level script to care about the rendering; the high-level script is only for AI game logic. Now, I know how to bind a method from C to Python, but how can I signal and coordinate the wait time between the incremental steps without the high-level function returning until it's time for the last step?
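    One way to structure this is to expose the pathfinding as a generator/coroutine: the engine resumes it once per frame and renders each intermediate coordinate pair, while the script-facing wrapper simply drains it and returns only the final pair. A rough Python sketch with hypothetical names (the question does not show the real engine API):

      import time

      def find_shortest_path(start, goal):
          # Hypothetical low-level routine: yields the next coordinate pair each
          # time it is resumed, so the game loop can render one step per frame.
          x, y = start
          while (x, y) != goal:
              x += (goal[0] > x) - (goal[0] < x)   # dummy straight-line walk
              y += (goal[1] > y) - (goal[1] < y)
              yield (x, y)

      def go_to(start, goal, frame_time=0.016):
          # Script-facing wrapper: blocks until the path is finished and returns
          # only the last coordinate pair; rendering happens as a side effect.
          last = start
          for step in find_shortest_path(start, goal):
              time.sleep(frame_time)   # stand-in for "render this frame"
              last = step
          return last

      print(go_to((0, 0), (3, 5)))  # -> (3, 5)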

    Read the article

  • SPARC T4-4 Beats 8-CPU IBM POWER7 on TPC-H @3000GB Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered a world-record TPC-H @3000GB benchmark result for systems with four processors. This result beats eight-processor results from IBM (POWER7) and HP (x86), and the SPARC T4-4 server also delivered better performance per core than those eight-processor systems. Comparisons below are system-to-system comparisons, highlighting Oracle's complete software and hardware solution. This database world-record result used Oracle's Sun Storage 2540-M2 arrays (rotating disk) connected to a SPARC T4-4 server running Oracle Solaris 11 and Oracle Database 11g Release 2, demonstrating the power of Oracle's integrated hardware and software solution.

    - The SPARC T4-4 based configuration achieved a TPC-H scale factor 3000 world record for four-processor systems of 205,792 QphH@3000GB with price/performance of $4.10/QphH@3000GB.
    - The SPARC T4-4 server with four SPARC T4 processors (32 cores total) is 7% faster than the IBM Power 780 server with eight POWER7 processors (32 cores total) on the TPC-H @3000GB benchmark.
    - The SPARC T4-4 server is 36% better in price/performance than the IBM Power 780 server on the TPC-H @3000GB benchmark.
    - The SPARC T4-4 server is 29% faster than the IBM Power 780 for data loading.
    - The SPARC T4-4 server is up to 3.4 times faster than the IBM Power 780 server for the Refresh Function.
    - The SPARC T4-4 server with four SPARC T4 processors is 27% faster than the HP ProLiant DL980 G7 server with eight x86 processors on the TPC-H @3000GB benchmark.
    - The SPARC T4-4 server is 52% faster than the HP ProLiant DL980 G7 server for data loading.
    - The SPARC T4-4 server is up to 3.2 times faster than the HP ProLiant DL980 G7 for the Refresh Function.
    - The SPARC T4-4 server achieved a peak IO rate from the Oracle database of 17 GB/sec. This rate was independent of the storage used, as demonstrated by the TPC-H @3000GB benchmark, which used twelve Sun Storage 2540-M2 arrays (rotating disk), and the TPC-H @1000GB benchmark, which used four Sun Storage F5100 Flash Array devices (flash storage).
    - The SPARC T4-4 server showed linear scaling from TPC-H @1000GB to TPC-H @3000GB, demonstrating that it can handle the increasingly larger databases required of DSS systems.
    - The SPARC T4-4 benchmark results demonstrate a complete solution for building Decision Support Systems, including data loading, business questions and refreshing data. Each phase usually has a time constraint, and the SPARC T4-4 server shows superior performance during each phase.
    - The TPC believes that comparisons of results published with different scale factors are misleading and discourages such comparisons.

    Performance Landscape

    The table below lists the leading TPC-H @3000GB results for non-clustered systems.
    TPC-H @3000GB, Non-Clustered Systems

    System | Processor | P/C/T – Memory | Composite (QphH) | $/perf ($/QphH) | Power (QppH) | Throughput (QthH) | Database | Available
    SPARC Enterprise M9000 | 3.0 GHz SPARC64 VII+ | 64/256/256 – 1024 GB | 386,478.3 | $18.19 | 316,835.8 | 471,428.6 | Oracle 11g R2 | 09/22/11
    SPARC T4-4 | 3.0 GHz SPARC T4 | 4/32/256 – 1024 GB | 205,792.0 | $4.10 | 190,325.1 | 222,515.9 | Oracle 11g R2 | 05/31/12
    SPARC Enterprise M9000 | 2.88 GHz SPARC64 VII | 32/128/256 – 512 GB | 198,907.5 | $15.27 | 182,350.7 | 216,967.7 | Oracle 11g R2 | 12/09/10
    IBM Power 780 | 4.1 GHz POWER7 | 8/32/128 – 1024 GB | 192,001.1 | $6.37 | 210,368.4 | 175,237.4 | Sybase 15.4 | 11/30/11
    HP ProLiant DL980 G7 | 2.27 GHz Intel Xeon X7560 | 8/64/128 – 512 GB | 162,601.7 | $2.68 | 185,297.7 | 142,685.6 | SQL Server 2008 | 10/13/10

    P/C/T = Processors, Cores, Threads
    QphH = the Composite Metric (bigger is better)
    $/QphH = the Price/Performance metric in USD (smaller is better)
    QppH = the Power Numerical Quantity
    QthH = the Throughput Numerical Quantity

    The following table lists data load times and refresh function times during the power run.

    TPC-H @3000GB, Non-Clustered Systems: Database Load & Database Refresh

    System | Processor | Data Loading (h:m:s) | T4 Advan | RF1 (sec) | T4 Advan | RF2 (sec) | T4 Advan
    SPARC T4-4 | 3.0 GHz SPARC T4 | 04:08:29 | 1.0x | 67.1 | 1.0x | 39.5 | 1.0x
    IBM Power 780 | 4.1 GHz POWER7 | 05:51:50 | 1.5x | 147.3 | 2.2x | 133.2 | 3.4x
    HP ProLiant DL980 G7 | 2.27 GHz Intel Xeon X7560 | 08:35:17 | 2.1x | 173.0 | 2.6x | 126.3 | 3.2x

    Data Loading = database load time
    RF1 = power test first refresh transaction
    RF2 = power test second refresh transaction
    T4 Advan = the ratio of time to T4 time

    Complete benchmark results found at the TPC benchmark website http://www.tpc.org.

    Configuration Summary and Results

    Hardware Configuration:
    SPARC T4-4 server
    - 4 x SPARC T4 3.0 GHz processors (total of 32 cores, 128 threads)
    - 1024 GB memory
    - 8 x internal SAS (8 x 300 GB) disk drives
    External Storage:
    - 12 x Sun Storage 2540-M2 array storage, each with 12 x 15K RPM 300 GB drives, 2 controllers, 2 GB cache

    Software Configuration:
    - Oracle Solaris 11 11/11
    - Oracle Database 11g Release 2 Enterprise Edition

    Audited Results:
    - Database Size: 3000 GB (Scale Factor 3000)
    - TPC-H Composite: 205,792.0 QphH@3000GB
    - Price/performance: $4.10/QphH@3000GB
    - Available: 05/31/2012
    - Total 3-year Cost: $843,656
    - TPC-H Power: 190,325.1
    - TPC-H Throughput: 222,515.9
    - Database Load Time: 4:08:29

    Benchmark Description

    The TPC-H benchmark is a performance benchmark established by the Transaction Processing Council (TPC) to demonstrate Data Warehousing/Decision Support Systems (DSS). TPC-H measurements are produced for customers to evaluate the performance of various DSS systems. These queries and updates are executed against a standard database under controlled conditions. Performance projections and comparisons between different TPC-H database sizes (100GB, 300GB, 1000GB, 3000GB, 10000GB, 30000GB and 100000GB) are not allowed by the TPC. TPC-H is a data-warehousing-oriented, non-industry-specific benchmark that consists of a large number of complex queries typical of decision support applications. It also includes some insert and delete activity that is intended to simulate loading and purging data from a warehouse. TPC-H measures the combined performance of a particular database manager on a specific computer system. The main performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@SF, where SF is the number of GB of raw data, referred to as the scale factor).
    QphH@SF is intended to summarize the ability of the system to process queries in both single and multiple user modes. The benchmark requires reporting of price/performance, which is the ratio of the total HW/SW cost plus 3 years of maintenance to the QphH. A secondary metric is storage efficiency, which is the ratio of total configured disk space in GB to the scale factor.

    Key Points and Best Practices

    - Twelve Sun Storage 2540-M2 arrays were used for the benchmark. Each Sun Storage 2540-M2 array contains 12 15K RPM drives and is connected to a single dual-port 8Gb FC HBA using 2 ports. Each Sun Storage 2540-M2 array showed 1.5 GB/sec for sequential read operations and showed linear scaling, achieving 18 GB/sec with twelve Sun Storage 2540-M2 arrays. These were stand-alone IO tests.
    - The peak IO rate measured from the Oracle database was 17 GB/sec.
    - Oracle Solaris 11 11/11 required very little system tuning.
    - Some vendors try to make the point that storage ratios are of customer concern. However, storage ratio size has more to do with disk layout and the increasing capacities of disks, so this is not an important metric by which to compare systems.
    - The SPARC T4-4 server and Oracle Solaris efficiently managed the system load of over one thousand Oracle Database parallel processes.
    - Six Sun Storage 2540-M2 arrays were mirrored to another six Sun Storage 2540-M2 arrays, on which all of the Oracle database files were placed. IO performance was high and balanced across all the arrays.
    - The TPC-H Refresh Function (RF) simulates the periodic refresh portion of a data warehouse by adding new sales data and deleting old sales data. Parallel DML (parallel insert and delete in this case) and database log performance are key for this function, and the SPARC T4-4 server outperformed both the IBM POWER7 server and the HP ProLiant DL980 G7 server. (See the RF columns above.)

    See Also

    - Transaction Processing Performance Council (TPC) Home Page
    - Ideas International Benchmark Page
    - SPARC T4-4 Server oracle.com OTN
    - Oracle Solaris oracle.com OTN
    - Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN
    - Sun Storage 2540-M2 Array oracle.com OTN

    Disclosure Statement

    TPC-H, QphH, and $/QphH are trademarks of the Transaction Processing Performance Council (TPC). For more information, see www.tpc.org. SPARC T4-4: 205,792.0 QphH@3000GB, $4.10/QphH@3000GB, available 5/31/12, 4 processors, 32 cores, 256 threads. IBM Power 780: 192,001.1 QphH@3000GB, $6.37/QphH@3000GB, available 11/30/11, 8 processors, 32 cores, 128 threads. HP ProLiant DL980 G7: 162,601.7 QphH@3000GB, $2.68/QphH@3000GB, available 10/13/10, 8 processors, 64 cores, 128 threads.

    Read the article

  • Ray Wang: Why engagement matters in an era of customer experience

    - by Michael Snow
    Why engagement matters in an era of customer experience
    R "Ray" Wang, Principal Analyst & CEO, Constellation Research

    Mobile enterprise, social business, cloud computing, advanced analytics, and unified communications are converging. Armed with the art of the possible, innovators are seeking to apply disruptive consumer technologies to enterprise class uses — call it the consumerization of IT in the enterprise. The likely results include new methods of furthering relationships, crafting longer term engagement, and creating transformational business models. It's part of a shift from transactional systems to engagement systems. These transactional systems have been around since the 1950s. You know them as ERP, finance and accounting systems, or even payroll. These systems are designed for massive computational scale; users find them rigid and techie. Meanwhile, we've moved to new engagement systems such as Facebook and Twitter in the consumer world. The rich usability and intuitive design reflect how users want to work — and now users are coming to expect the same paradigms and designs in their enterprise world.

    ~~~

    Ray is a prolific contributor to his own blog as well as others. For a sneak peek at Ray's thoughts on engagement, take a look at this quick teaser on Avoiding Social Media Fatigue Through Engagement. Or perhaps you might agree with Ray on Dealing With The Real Problem In Social Business Adoption – The People! Check out Ray's post on the Harvard Business Review Blog to get his perspective on "How to Engage Your Customers and Employees." For a daily dose of Ray, follow him on Twitter: @rwang0. But MOST IMPORTANTLY... don't miss the opportunity to join leading industry analyst R "Ray" Wang of Constellation Research in the latest webcast of the Oracle Social Business Thought Leaders Series as he explains how to apply the 9 C's of Engagement for both your customers and employees.

    Read the article

  • WPF 3D - converting from Point2D to Point3D and back again

    - by DanM
    I'm new to WPF 3D, so I may just be missing something obvious, but how would I go about converting from a 2D coordinate to a 3D coordinate and back again? I'd like the 2D coordinate to be the location measured from the upper-left corner of the Viewport3D and the 3D coordinate to be the location relative to the origin (0, 0, 0) of the 3D world. The conversion functions should have these signatures:

    public Point3D Point2DAndWorldZToPoint3D(Point2D point2D, double worldZ)
    // usually I want to know where a 2D point will be on the ground plane
    // so worldZ will usually be zero (but not always)

    public Point2D Point3DToPoint2D(Point3D point3D)

    I found this related question, but it only addresses conversion from 3D to 2D (not the reverse), and I'm not sure if the answers are up to date. Note, I'm currently using .NET 3.5, but if there are improvements in .NET 4.0 that would help me, please let me know.
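    WPF plumbing aside, the math is the standard project/unproject pair. A generic sketch with NumPy, assuming a 4x4 view-projection matrix that maps world coordinates to clip space using column vectors (how to obtain that matrix from a Viewport3D is not shown here):

      import numpy as np

      def point3d_to_point2d(p, view_proj, width, height):
          # World-space (x, y, z) -> viewport pixel coordinates.
          clip = view_proj @ np.array([p[0], p[1], p[2], 1.0])
          ndc = clip[:3] / clip[3]                    # perspective divide
          return ((ndc[0] + 1) * 0.5 * width,         # NDC x in [-1, 1] -> pixels
                  (1 - ndc[1]) * 0.5 * height)        # NDC y points up, pixels point down

      def point2d_and_worldz_to_point3d(sx, sy, world_z, view_proj, width, height):
          # Viewport pixel + desired world Z -> world point, by unprojecting the
          # pixel at two depths to get a ray and intersecting it with z = world_z.
          inv = np.linalg.inv(view_proj)
          ndc_x = 2.0 * sx / width - 1.0
          ndc_y = 1.0 - 2.0 * sy / height

          def unproject(ndc_z):
              w = inv @ np.array([ndc_x, ndc_y, ndc_z, 1.0])
              return w[:3] / w[3]

          near, far = unproject(0.0), unproject(1.0)  # two points on the pixel's ray
          t = (world_z - near[2]) / (far[2] - near[2])
          return near + t * (far - near)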

    Read the article

  • database design to speed up Hibernate querying of a large dataset

    - by paddydub
    I currently have the tables below representing a bus network mapped in Hibernate, accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster; I load all of these tables into Lists to perform the route planner logic. I would appreciate any ideas on how to speed up performance, or any suggestions for another way to approach this problem of handling a large set of data.

    Coordinate Connections table (INT, INT, INT), containing 50,000 coordinate connections:

      ID  FROMCOORDID  TOCOORDID
      1   1            2
      2   1            17
      3   1            63
      4   1            64
      5   1            65
      6   1            95

    Coordinate table (INT, DECIMAL, DECIMAL), containing 4,700 coordinates:

      ID  LAT        LNG
      0   59.352669  -7.264341
      1   59.352669  -7.264341
      2   59.350012  -7.260653
      3   59.337585  -7.189798
      4   59.339221  -7.193582
      5   59.341408  -7.205888

    Bus Stop table (INT, INT, INT), containing 15,000 stops:

      StopID      RouteID  COORDINATEID
      1000100001  100      17
      1000100002  100      18
      1000100003  100      19
      1000100004  100      20
      1000100005  100      21
      1000100006  100      22
      1000100007  100      23

    This is how long it takes to load all the data from each table:

      stop.findAll = 148ms, stops.size: 15670
      Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_, coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_
      coord.findAll = 51ms, coordinates.size: 4704
      Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_, coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_ from COORDCONNECTIONS coordconne0_
      coordinateConnectionDao.findAll = 238ms; coordConnectioninates.size: 48132

    Hibernate annotations:

      @Entity
      @Table(name = "STOPS")
      public class Stop implements Serializable {
          @Id
          @GeneratedValue
          @Column(name = "COORDINATEID")
          private Integer CoordinateID;
          @Column(name = "LAT")
          private double latitude;
          @Column(name = "LNG")
          private double longitude;
      }

      @Table(name = "COORDINATES")
      public class Coordinate {
          @Id
          @GeneratedValue
          @Column(name = "COORDINATEID")
          private Integer CoordinateID;
          @Column(name = "LAT")
          private double latitude;
          @Column(name = "LNG")
          private double longitude;
      }

      @Entity
      @Table(name = "COORDCONNECTIONS")
      public class CoordConnection {
          @Id
          @GeneratedValue
          @Column(name = "COORDCONNECTIONID")
          private Integer CoordinateID;

          /** From Coordinate_id value */
          @Column(name = "FROMCOORDID", nullable = false)
          private int fromCoordID;

          /** To Coordinate_id value */
          @Column(name = "TOCOORDID", nullable = false)
          private int toCoordID;

          //private Coordinate toCoordID;
      }

    Read the article

  • Application Architect vs. Systems Architect vs. Enterprise Architect?

    - by iaman00b
    So many buzzwords. Not sure if I need to start playing BS Bingo or not. And I'm not trying to be cynical. But I've heard many people with these various titles. There never seems to be a clear delineation between the three. Or there's a lot of domain crossover between the three. Actually, another one I've seen while looking around here on Stack Overflow has been "Solutions Architect" as well. But that one doesn't seem to be as prevalent in other places. There are questions here and there with vague answers. But I'd like definitive answers to this. Please assume I'm still relatively new to software stuff and that I'm trying to map out a career path. Oh, and please be gentle, folks; this most definitely is not a duplicate question. Neither is it an aggregate. So kindly leave it alone. Xp

    Read the article

  • What operating systems are available for an 8-bit microprocessor?

    - by Benoit
    It does not need to be a full-fledged OS, but it should at least have multitasking capabilities (i.e. a scheduler). Please mention which processor architecture it works on. This is a survey, so exact capabilities are not really important. Think of this as a place to look for possibilities when your next 8-bit embedded project comes up... I realize that most 8-bit micros do not require an OS, but just as a counter-example, Rabbit Semiconductor offers the RCM3710 processor module with 4 serial ports, a 10-BaseT Ethernet port, 512K RAM and 512K Flash. All that for $39, all based around an 8-bit Z80 core. 8-bit does NOT necessarily mean extreme resource constraint.

    Read the article

  • MacRuby - CLLocation Properties Not Accessible

    - by Craig Williams
    Anyone know why this works in Objective-C but not in MacRuby?

    Objective-C version:

      CLLocation *loc = [[CLLocation alloc] initWithLatitude:38.0 longitude:-122.0];
      NSLog(@"Lat: %.2f", loc.coordinate.latitude);
      NSLog(@"Long: %.2f", loc.coordinate.longitude);
      [loc release];
      // Results:
      // 2010-04-30 16:48:55.568 OCCoreLocationTest[70030:a0f] Lat: 38.00
      // 2010-04-30 16:48:55.570 OCCoreLocationTest[70030:a0f] Long: -122.00

    Here is the MacRuby version with results:

      loc = CLLocation.alloc.initWithLatitude(38.0, longitude:-122.0)
      puts loc.class                     # => CLLocation
      puts loc.description               # => <+38.00000000, -122.00000000> +/- 0.00m (speed -1.00 mps / course -1.00) @ 2010-04-30 16:37:47 -0600
      puts loc.respond_to?(:coordinate)  # => true
      puts loc.coordinate.latitude       # => Error: unrecognized runtime type `{?=dd}' (TypeError)

    Read the article

  • What are the prerequisites for learning embedded systems programming?

    - by WarDoGG
    I have completed my graduation in computer engineering. We had some basic electronics courses in digital signal processing, information theory, etc., but my primary field is programming. I was looking to get into embedded systems programming with NO knowledge of how it is done, but I am very keen on going into this field. My questions: What languages are used to program embedded systems? Will I be able to learn without having any basics in electronics? Are there any other prerequisites I should know about?

    Read the article

  • How to program critical section for reader-writer systems?

    - by Srinivas Nayak
    Hi, let's say I have a reader-writer system where a reader and a writer run concurrently. 'a' and 'b' are two shared variables that are related to each other, so modifying them needs to be an atomic operation. A reader-writer system can be of the following types:

    rr, ww, r-w, r-ww, rr-w, rr-ww

    where r = single reader, rr = multiple readers, w = single writer, ww = multiple writers.

    Now we can have a read method for a reader and a write method for a writer as follows; I have written them per system type.

    rr:
      read_method { read a; read b; }

    ww:
      write_method { lock(m); write a; write b; unlock(m); }

    r-w, r-ww, rr-w, rr-ww:
      read_method { lock(m); read a; read b; unlock(m); }
      write_method { lock(m); write a; write b; unlock(m); }

    For a multiple-reader system, shared variable access doesn't need to be atomic. For a multiple-writer system, shared variable access needs to be atomic, so it is locked with 'm'. But for system types 3 to 6, are my read_method and write_method correct? How can I improve them? Sincerely, Srinivas Nayak
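    For the rr-w and rr-ww cases the mutex version is correct but needlessly serializes the readers; a readers-writer lock lets readers overlap while still keeping the a/b update atomic with respect to them. A minimal Python sketch of the classic scheme (no writer-priority handling):

      import threading

      class ReadWriteLock:
          # Many readers may hold the lock at once; a writer gets exclusive access.
          def __init__(self):
              self._readers = 0
              self._counter_lock = threading.Lock()   # guards the reader count
              self._writer_lock = threading.Lock()    # held by a writer, or while readers exist

          def acquire_read(self):
              with self._counter_lock:
                  self._readers += 1
                  if self._readers == 1:
                      self._writer_lock.acquire()     # first reader blocks writers

          def release_read(self):
              with self._counter_lock:
                  self._readers -= 1
                  if self._readers == 0:
                      self._writer_lock.release()     # last reader lets writers in

          def acquire_write(self):
              self._writer_lock.acquire()

          def release_write(self):
              self._writer_lock.release()

      rw, a, b = ReadWriteLock(), 0, 0

      def read_method():            # readers may run this concurrently
          rw.acquire_read()
          try:
              return a, b
          finally:
              rw.release_read()

      def write_method(x, y):       # writers exclude readers and other writers
          global a, b
          rw.acquire_write()
          try:
              a, b = x, y
          finally:
              rw.release_write()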

    Read the article

  • What resources will help me understand the fundamentals of relational database systems?

    - by Rachel
    These are a few of the fundamental database questions that have always given me trouble. I have tried using Google and the wiki, but somehow I miss out on understanding the functionality rather than the terminology. If possible, I would really appreciate it if someone could share more insight on these questions using some visual, representative examples. What is a key? A candidate key? A primary key? An alternate key? A foreign key? What is an index and how does it help your database? What data types are available, and when should you use which ones?
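    The key and index terminology is easiest to see in a concrete schema; here is a small sketch using Python's built-in sqlite3 module (the table and column names are invented for illustration):

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          CREATE TABLE department (
              dept_id   INTEGER PRIMARY KEY,  -- primary key: the candidate key chosen as the row identifier
              dept_code TEXT NOT NULL UNIQUE  -- alternate key: another candidate key, unique but not primary
          );
          CREATE TABLE employee (
              emp_id  INTEGER PRIMARY KEY,
              name    TEXT NOT NULL,
              dept_id INTEGER NOT NULL,
              FOREIGN KEY (dept_id) REFERENCES department(dept_id)  -- foreign key: refers to another table's key
          );
          -- An index speeds up lookups on a column at the cost of extra storage and slower writes.
          CREATE INDEX idx_employee_dept ON employee(dept_id);
      """)
      conn.execute("INSERT INTO department VALUES (1, 'ENG')")
      conn.execute("INSERT INTO employee VALUES (1, 'Rachel', 1)")
      # With the index, this lookup can avoid scanning the whole employee table.
      print(conn.execute("SELECT name FROM employee WHERE dept_id = ?", (1,)).fetchall())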

    Read the article

  • Why is Read-Modify-Write necessary for registers on embedded systems?

    - by Adam Shiemke
    I was reading http://embeddedgurus.com/embedded-bridge/2010/03/different-bit-types-in-different-registers/, which said: "With read/write bits, firmware sets and clears bits when needed. It typically first reads the register, modifies the desired bit, then writes the modified value back out", and I have run into that construct while maintaining some production code coded by old-salt embedded guys here. I don't understand why this is necessary. When I want to set/clear a bit, I always just or/nand with a bitmask. To my mind, this solves any thread-safety problems, since I assume setting a register (either by assignment or by or-ing with a mask) only takes one cycle. On the other hand, if you first read the register, then modify, then write, an interrupt happening between the read and write may result in writing an old value to the register. So why read-modify-write? Is it still necessary?
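    Part of the answer is that reg |= mask is itself a read-modify-write once compiled: unless the part provides an atomic bit-set instruction or dedicated set/clear registers, the CPU still loads the register, ORs, and stores. A deterministic Python simulation of the lost-update hazard (not real hardware, just the problematic interleaving):

      reg = 0b0000  # stand-in for a memory-mapped register

      def interrupt_handler():
          # Fires between the main code's read and write; sets bit 1.
          global reg
          reg |= 0b0010

      def main_sets_bit0():
          global reg
          value = reg          # read            (sees 0b0000)
          interrupt_handler()  # interrupt here  (reg becomes 0b0010)
          value |= 0b0001      # modify the stale copy
          reg = value          # write back: the interrupt's bit is clobbered

      main_sets_bit0()
      print(bin(reg))  # 0b1, not 0b11 -- the ISR's update was lost
      # The usual fixes: disable interrupts around the sequence, or use the
      # peripheral's bit-set/bit-clear registers if it provides them.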

    Read the article

  • How do faces in .obj work?

    - by Adl
    Hi, when parsing an .obj file with vertices and vertex faces, it is easy to pass the vertices to the shader and then use glDrawElements with the vertex faces. When parsing an .obj file with vertices and texture coordinates, another type of face occurs: texture-coordinate faces. When displaying textures, apart from loading images, binding them, and passing texture coordinates into the parser, how do you use the texture-coordinate faces? They differ from the vertex faces, and I suppose the texture-coordinate faces have a purpose when displaying textures? Regards, Niclas
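    In an .obj file each f entry carries separate indices per attribute (f v/vt or f v/vt/vn), but glDrawElements wants a single index per vertex, so the usual approach is to build a combined vertex for every unique position/texcoord pair and re-index the faces against that. A rough Python sketch of the de-indexing (triangular faces only, no error handling):

      def load_obj(path):
          positions, texcoords = [], []
          vertices, indices, cache = [], [], {}
          with open(path) as f:
              for line in f:
                  parts = line.split()
                  if not parts:
                      continue
                  if parts[0] == "v":              # vertex position
                      positions.append(tuple(map(float, parts[1:4])))
                  elif parts[0] == "vt":           # texture coordinate
                      texcoords.append(tuple(map(float, parts[1:3])))
                  elif parts[0] == "f":            # face corner: "v/vt" or "v/vt/vn", 1-based indices
                      for corner in parts[1:4]:
                          v_idx, vt_idx = (int(i) - 1 for i in corner.split("/")[:2])
                          key = (v_idx, vt_idx)
                          if key not in cache:     # first time this combination is seen
                              cache[key] = len(vertices)
                              vertices.append(positions[v_idx] + texcoords[vt_idx])
                          indices.append(cache[key])
          return vertices, indices  # interleaved (x, y, z, u, v) per vertex, plus one index list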

    Read the article

  • Are indivisible operations still indivisible on multiprocessor and multicore systems?

    - by Steve314
    As per the title, plus what are the limitations and gotchas. For example, on x86 processors, alignment for most data types is optional - an optimisation rather than a requirement. That means that a pointer may be stored at an unaligned address, which in turn means that pointer might be split over a cache page boundary. Obviously this could be done if you work hard enough on any processor (picking out particular bytes etc), but not in a way where you'd still expect the write operation to be indivisible. I seriously doubt that a multicore processor can ensure that other cores can guarantee a consistent all-before or all-after view of a written pointer in this unaligned-write-crossing-a-page-boundary situation. Am I right? And are there any similar gotchas I haven't thought of?

    Read the article

  • Exposing a service to external systems - How should I design the contract?

    - by Larsi
    Hi! I know this question has been asked here before, but I'm still not sure what to select. My service will be called from many third-party systems in the enterprise. I'm almost sure the information the service will collect (MyBigClassWithAllInfo) will change during the product's lifetime. Is it still a good idea to expose objects? These are basically my two alternatives:

      [ServiceContract]
      public interface ICollectStuffService
      {
          [OperationContract]
          SetDataResponseMsg SetData(SetDataRequestMsg dataRequestMsg);
      }

      // Alternative 1: Put all data inside an xml file
      [DataContract]
      public class SetDataRequestMsg
      {
          [DataMember]
          public string Body { get; set; }
          [DataMember]
          public string OtherPropertiesThatMightBeHandy { get; set; } // ??
      }

      // Alternative 2: Expose the objects
      [DataContract]
      public class SetDataRequestMsg
      {
          [DataMember]
          public Header Header { get; set; }
          [DataMember]
          public MyBigClassWithAllInfo ExposedObject { get; set; }
      }

      public class SetDataResponseMsg
      {
          [DataMember]
          public ServiceError Error { get; set; }
      }

    The xml file would look like this:

      <?xml version="1.0" encoding="utf-8"?>
      <Message>
        <Header>
          <InfoAboutTheSender>...</InfoAboutTheSender>
        </Header>
        <StuffToCollectWithAllTheInfo>
          <stuff1>...</stuff1>
        </StuffToCollectWithAllTheInfo>
      </Message>

    Any thoughts on how this service should be implemented? Thanks, Larsi

    Read the article

  • pass array by reference in C

    - by Yassir
    How can I pass an array of structs by reference? Example:

      struct Coordinate {
          int X;
          int Y;
      };

      SomeMethod(Coordinate *Coordinates[]){
          //Do Something with the array
      }

      int main(){
          Coordinate Coordinates[10];
          SomeMethod(&Coordinates);
      }

    Read the article

  • How are operating systems created? Which language is chosen for coding?

    - by Nitesh Panchal
    Hello, how is a basic OS created? In which language do programmers write an OS? C or Assembly, or something else? Also, Assembly has a limited instruction set (mov, etc.), so how can anybody create an OS in Assembly? And even C has limited functionality, yet it is said to be the mother of all languages; how can anybody create a full OS with stunning graphics in C? It's simply beyond me. And how long does it take to create a very basic OS (unlike Windows 7 :p)?

    Read the article

  • How to solve a high load average issue in Linux systems?

    - by RoCkStUnNeRs
    The following shows the load average and CPU time at several points during the day; the output below was parsed from the top command.

      TIME      LOAD    US      SY      NICE    ID       WA      HI      SI      ST
      12:02:27  208.28  4.2%us  1.0%sy  0.2%ni  93.9%id  0.7%wa  0.0%hi  0.0%si  0.0%st
      12:23:22  195.48  4.2%us  1.0%sy  0.2%ni  93.9%id  0.7%wa  0.0%hi  0.0%si  0.0%st
      12:34:55  199.15  4.2%us  1.0%sy  0.2%ni  93.9%id  0.7%wa  0.0%hi  0.0%si  0.0%st
      13:41:50  203.66  4.2%us  1.0%sy  0.2%ni  93.8%id  0.8%wa  0.0%hi  0.0%si  0.0%st
      13:42:58  278.63  4.2%us  1.0%sy  0.2%ni  93.8%id  0.8%wa  0.0%hi  0.0%si  0.0%st

    The following is additional information about the system.

      cat /proc/cpuinfo
      processor : 0
      vendor_id : GenuineIntel
      cpu family : 6
      model : 23
      model name : Intel(R) Xeon(R) CPU E5410 @ 2.33GHz
      stepping : 10
      cpu MHz : 1992.000
      cache size : 6144 KB
      physical id : 0
      siblings : 4
      core id : 0
      cpu cores : 4
      apicid : 0
      initial apicid : 0
      fdiv_bug : no
      hlt_bug : no
      f00f_bug : no
      coma_bug : no
      fpu : yes
      fpu_exception : yes
      cpuid level : 13
      wp : yes
      flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm constant_tsc arch_perfmon pebs bts pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr dca sse4_1 lahf_lm
      bogomips : 4658.69
      clflush size : 64
      power management:

      processor : 1
      vendor_id : GenuineIntel
      cpu family : 6
      model : 23
      model name : Intel(R) Xeon(R) CPU E5410 @ 2.33GHz
      stepping : 10
      cpu MHz : 1992.000
      cache size : 6144 KB
      physical id : 0
      siblings : 4
      core id : 1
      cpu cores : 4
      apicid : 1
      initial apicid : 1
      fdiv_bug : no
      hlt_bug : no
      f00f_bug : no
      coma_bug : no
      fpu : yes
      fpu_exception : yes
      cpuid level : 13
      wp : yes
      flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm constant_tsc arch_perfmon pebs bts pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr dca sse4_1 lahf_lm
      bogomips : 4655.00
      clflush size : 64
      power management:

      processor : 2
      vendor_id : GenuineIntel
      cpu family : 6
      model : 23
      model name : Intel(R) Xeon(R) CPU E5410 @ 2.33GHz
      stepping : 10
      cpu MHz : 1992.000
      cache size : 6144 KB
      physical id : 0
      siblings : 4
      core id : 2
      cpu cores : 4
      apicid : 2
      initial apicid : 2
      fdiv_bug : no
      hlt_bug : no
      f00f_bug : no
      coma_bug : no
      fpu : yes
      fpu_exception : yes
      cpuid level : 13
      wp : yes
      flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm constant_tsc arch_perfmon pebs bts pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr dca sse4_1 lahf_lm
      bogomips : 4655.00
      clflush size : 64
      power management:

      processor : 3
      vendor_id : GenuineIntel
      cpu family : 6
      model : 23
      model name : Intel(R) Xeon(R) CPU E5410 @ 2.33GHz
      stepping : 10
      cpu MHz : 1992.000
      cache size : 6144 KB
      physical id : 0
      siblings : 4
      core id : 3
      cpu cores : 4
      apicid : 3
      initial apicid : 3
      fdiv_bug : no
      hlt_bug : no
      f00f_bug : no
      coma_bug : no
      fpu : yes
      fpu_exception : yes
      cpuid level : 13
      wp : yes
      flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm constant_tsc arch_perfmon pebs bts pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr dca sse4_1 lahf_lm
      bogomips : 4654.99
      clflush size : 64
      power management:

    Memory:
            total  used  free  shared  buffers  cached
      Mem:  2      1     1     0       0        0
      Swap: 5      0     5

    Can anyone tell me why the system is getting such an abnormally high load?

    Read the article

  • View results of affine transform

    - by stckjp
    I am trying to find out why, when I apply affine transformations to an image in OpenCV, the result is not visible in the preview window and the entire window is black. How can I find a workaround for this problem so that I can always view my transformed image (the result of the affine transform) in the window, no matter the applied transformation?

    Update: I think this happens because all the transformations are calculated with respect to the origin of the coordinate system (the top-left corner of the image). While for rotation I can specify the center of the rotation and I am able to view the result, when I perform scaling I am not able to control where the transformed image goes. Is it possible to somehow move the coordinate system to make the image fit in the window?

    Update 2: I have an image which contains only an ROI at some position in it (the rest of the image is black), and I need to apply a set of affine transforms to it. To make things simpler, and to see the effect of each individual transform, I applied each transform one by one. What I noticed is that, whenever I move (translate) the image so that the center of the ROI is at the origin of the coordinate system (the top-left corner of the view window), all the affine transforms perform correctly without moving. However, with the center of the ROI translated to the origin, the upper and left parts of the ROI remain cut off outside of the current view window. If I move the ROI's central point to another point in the view window (for example the window center), an affine transform of the type A = [a 0 0; 0 b 0] (A is a 2x3 matrix, the parameter of the warpAffine function) moves the image (ROI) outside of the view window (which doesn't happen if the ROI's center is at the top-left corner). How can I modify the affine transform so the image doesn't move out of its place (i.e. behaves the same way as when the ROI center is at the origin of the coordinate system)?
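    A common fix is to compose the affine transform with translations so it is applied about the ROI centre (or the window centre) rather than the top-left origin, conceptually M = T(c) * A * T(-c), which is what getRotationMatrix2D does internally for rotations. A sketch with NumPy and OpenCV (the input path and scale factors below are placeholders):

      import numpy as np
      import cv2

      def warp_about_point(img, a_2x3, cx, cy):
          # Apply the 2x3 affine `a_2x3` as if (cx, cy) were the origin, so the
          # transformed content stays anchored at (cx, cy) instead of drifting away.
          A = np.vstack([a_2x3, [0.0, 0.0, 1.0]])                           # lift to 3x3
          to_origin = np.array([[1, 0, -cx], [0, 1, -cy], [0, 0, 1]], dtype=np.float64)
          back      = np.array([[1, 0,  cx], [0, 1,  cy], [0, 0, 1]], dtype=np.float64)
          M = back @ A @ to_origin                                          # T(c) * A * T(-c)
          h, w = img.shape[:2]
          return cv2.warpAffine(img, M[:2], (w, h))

      img = cv2.imread("roi_image.png")                                     # placeholder input
      h, w = img.shape[:2]
      scaling = np.array([[0.5, 0.0, 0.0], [0.0, 0.5, 0.0]])                # A = [a 0 0; 0 b 0]
      result = warp_about_point(img, scaling, w / 2.0, h / 2.0)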

    Read the article

  • how do copyright permission systems for content hosting sites work?

    - by zebraman
    I am wondering about subscription sites that host content, like recorded performances from concerts. I'm sure there is a tangle of copyright permissions that must be granted for these video/audio files to be hosted. For example, if a band plays a cover of another band's song, permission must be obtained not only from the band that performed, but also from the band that owns the song, and perhaps even from the venue that hosted the performance, in order to record the video and post the content. I am curious how websites that host content like this work. How might an automated copyright system keep track of who owns certain performances and obtain permission from those owners to record and post their content?

    Read the article
