Search Results

Search found 18865 results on 755 pages for 'distinct values'.


  • How to enable CDR on AsteriskNow 1.5

    - by Michal Niklas
    I have upgraded the PBX to Asterisk 1.6.2.7 and now CDR files are not created. It looks like such logging is disabled:

        Connected to Asterisk 1.6.2.7 currently running on pbx2 (pid = 5824)
        Verbosity is at least 3
        pbx2*CLI> cdr show status
        pbx2*CLI>
        Call Detail Record (CDR) settings
        ----------------------------------
        Logging:  Disabled
        Mode:     Simple

    Asterisk shows that the CDR modules are loaded:

        pbx2*CLI> module show like cd
        Module           Description                               Use Count
        cdr_manager.so   Asterisk Manager Interface CDR Backend    0
        cdr_csv.so       Comma Separated Values CDR Backend        0
        app_cdr.so       Tell Asterisk to not maintain a CDR for   0
        app_forkcdr.so   Fork The CDR into 2 separate entities     0
        func_cdr.so      Call Detail Record (CDR) dialplan functi  0
        cdr_custom.so    Customizable Comma Separated Values CDR   0
        6 modules loaded

    How do I enable the creation of CDR CSV files?

    Read the article

  • Passing multiple Vertex Attributes in GLSL 130

    - by Roy T.
    (Note: this question is closely related to this one; however, I didn't fully understand the accepted answer.) To support video cards in laptops I have to rewrite my GLSL 330 shaders for GLSL 130. I'm trying to do this, but somehow I can't get vertex attributes to work properly. My 330 shaders look like this:

        #version 330
        layout(location = 0) in vec4 position;
        layout(location = 3) in vec4 color;
        smooth out vec4 theColor;

        void main()
        {
            gl_Position = position;
            theColor = color;
        }

    Now, this explicit layout is not allowed in GLSL 130, so I referenced this page to see what the default layouts for some values would be. As you can see, position should be the 0th vertex attribute and color should be the 3rd vertex attribute. Because this is a test case, I had already configured my explicit layouts in the same way, which worked, so I simply rewrote my shader to the following and expected it to work:

        #version 130
        smooth out vec4 theColor;

        void main()
        {
            gl_Position = gl_Vertex;
            theColor = gl_Color;
        }

    However, this doesn't work: the value of gl_Color is always (1,1,1,1). So how should I pass multiple vertex attributes to my GLSL 130 shaders? For reference, this is how I set up my vertex buffer object and attributes (I've just adapted this tutorial to Java + JOGL):

        gl.glBindBuffer(GL3.GL_ARRAY_BUFFER, vertex_buffer_id);
        gl.glEnableVertexAttribArray(0);
        gl.glEnableVertexAttribArray(3);
        gl.glVertexAttribPointer(0, 4, GL3.GL_FLOAT, false, 0, 0);
        gl.glVertexAttribPointer(3, 4, GL3.GL_FLOAT, false, 0, 4*4*4);
        gl.glDrawArrays(GL3.GL_TRIANGLE_STRIP, 0, 4);
        gl.glDisableVertexAttribArray(0);
        gl.glDisableVertexAttribArray(3);

    EDIT: I solved the problem by querying for the locations of position and color using glGetAttribLocation. However, I still don't understand why the 'hardcoded' values like gl_Color didn't work. Can't I upload data into them as normal? Shouldn't they be used?

    Read the article

  • After restoring a SQL Server database from another server - get login fails

    - by Renso
    Issue: After you have restored a SQL Server database from another server, let's say from production to a Q/A environment, you get the "Login Fails" message for your service account.
    Reason: User logon information is stored in the syslogins table in the master database. By changing servers, or by altering this information by rebuilding or restoring an old version of the master database, the information may be different from when the user database dump was created. If logins do not exist for the users, they will receive an error indicating "Login failed" while attempting to log on to the server. If the user logins do exist, but the SUID values (for 6.x) or SID values (for 7.0) in master..syslogins and the sysusers table in the user database differ, the users may have different permissions than expected in the user database.
    Solution: sp_change_users_login links a user entry in the sys.database_principals system catalog view in the current database to a SQL Server login of the same name. If a login with the same name does not exist, one will be created. Examine the result from the Auto_Fix statement to confirm that the correct link is in fact made. Avoid using Auto_Fix in security-sensitive situations. When you use Auto_Fix, you must specify user, and password if the login does not already exist; otherwise you must specify user, but password will be ignored. login must be NULL, and user must be a valid user in the current database. The login cannot have another user mapped to it. Execute the following stored procedure; in this example the login user name is "MyUser":

        exec sp_change_users_login 'Auto_Fix', 'MyUser'

    NOTE: sp_change_users_login cannot be used with a SQL Server login created from a Windows principal or with a user created by using CREATE USER WITHOUT LOGIN.

    Read the article

  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure execution timing of specific methods in your application you usually use Time Measurement Method Potential Pitfalls Stopwatch Most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor specific you can get greatly different values when your thread is scheduled to another core or the core goes into a power saving mode. But things do change luckily: Intel's Designer's vol3b, section 16.11.1 "16.11.1 Invariant TSC The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-. and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource." DateTime.Now Good but it has only a resolution of 16ms which can be not enough if you want more accuracy.   Reporting Method Potential Pitfalls Console.WriteLine Ok if not called too often. Debug.Print Are you really measuring performance with Debug Builds? Shame on you. Trace.WriteLine Better but you need to plug in some good output listener like a trace file. But be aware that the first time you call this method it will read your app.config and deserialize your system.diagnostics section which does also take time.   In general it is a good idea to use some tracing library which does measure the timing for you and you only need to decorate some methods with tracing so you can later verify if something has changed for the better or worse. In my previous article I did compare measuring performance with quantum mechanics. This analogy does work surprising well. When you measure a quantum system there is a lower limit how accurately you can measure something. The Heisenberg uncertainty relation does tell us that you cannot measure of a quantum system the impulse and location of a particle at the same time with infinite accuracy. For programmers the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application you will need to store them somewhere. The fastest storage space besides the CPU cache is the memory. But if your timing values do consume all available memory there is no memory left for the actual application to run. On the other hand if you try to record all memory allocations of your application you will also need to store the data somewhere. This will cost you memory and execution time. These constraints are always there and regardless how good the marketing of tool vendors for performance and memory profilers are: Any measurement will disturb the system in a non predictable way. Commercial tool vendors will tell you they do calculate this overhead and subtract it from the measured values to give you the most accurate values but in reality it is not entirely true. After falling into the trap to trust the profiler timings several times I have got into the habit to Measure with a profiler to get an idea where potential bottlenecks are. 
Measure again with tracing only the specific methods to check if this method is really worth optimizing. Optimize it Measure again. Be surprised that your optimization has made things worse. Think harder Implement something that really works. Measure again Finished! - Or look for the next bottleneck. Recently I have looked into issues with serialization performance. For serialization DataContractSerializer was used and I was not sure if XML is really the most optimal wire format. After looking around I have found protobuf-net which uses Googles Protocol Buffer format which is a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes: using ProtoBuf; using System; using System.Diagnostics; using System.IO; using System.Reflection; using System.Runtime.Serialization; [DataContract, Serializable] class Data { [DataMember(Order=1)] public int IntValue { get; set; } [DataMember(Order = 2)] public string StringValue { get; set; } [DataMember(Order = 3)] public bool IsActivated { get; set; } [DataMember(Order = 4)] public BindingFlags Flags { get; set; } } class Program { static MemoryStream _Stream = new MemoryStream(); static MemoryStream Stream { get { _Stream.Position = 0; _Stream.SetLength(0); return _Stream; } } static void Main(string[] args) { DataContractSerializer ser = new DataContractSerializer(typeof(Data)); Data data = new Data { IntValue = 100, IsActivated = true, StringValue = "Hi this is a small string value to check if serialization does work as expected" }; var sw = Stopwatch.StartNew(); int Runs = 1000 * 1000; for (int i = 0; i < Runs; i++) { //ser.WriteObject(Stream, data); Serializer.Serialize<Data>(Stream, data); } sw.Stop(); Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs); Console.ReadLine(); } } The results are indeed promising: Serializer Time in ms N objects protobuf-net   807 1000000 DataContract 4402 1000000 Nearly a factor 5 faster and a much more compact wire format. Lets use it! After switching over to protbuf-net the transfered wire data has dropped by a factor two (good) and the performance has worsened by nearly a factor two. How is that possible? We have measured it? Protobuf-net is much faster! As it turns out protobuf-net is faster but it has a cost: For the first time a type is de/serialized it does use some very smart code-gen which does not come for free. Lets try to measure this one by setting of our performance test app the Runs value not to one million but to 1. Serializer Time in ms N objects protobuf-net 85 1 DataContract 24 1 The code-gen overhead is significant and can take up to 200ms for more complex types. The break even point where the code-gen cost is amortized by its faster serialization performance is (assuming small objects) somewhere between 20.000-40.000 serialized objects. As it turned out my specific scenario involved about 100 types and 1000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up was to reduce the number of types and to serialize primitive types via BinaryWriter directly which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far do not help much. After looking more deeper at the profiling data I did found that one of the 1000 calls did take 50% of the time. So how do I find out which call it was? 
Normal profilers fall short at this discipline. A relatively unknown (and undeservedly so) profiler is SpeedTrace, which, unlike normal profilers, creates traces of your application by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out if this stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call happened during the serialization of a bool value. When, after much analysis, you encounter something unreasonable you cannot explain, the chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly ok.  When you profile a complex system with many interconnected processes you can never be sure that the timings you just measured are accurate at all. Some process might be hitting the disc, slowing things down for all other processes for some seconds as well. There is a big difference between warm and cold startup. If you restart all processes you can basically forget the first run, because the OS disc cache, JIT and GCs make the measured timings very flexible. When you are in need of a random number generator you should measure cold startup times of a sufficiently complex system. After the first run you can try again, getting different and much lower numbers. Now try again at least two times to get some feeling for how stable the numbers are. Oh, and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today. Thanks to GC and other random stuff it can become pretty hard to find things worth optimizing if no big bottlenecks except bloatloads of code are left anymore. When I have found a spot worth optimizing I make the code changes and measure again to check if something has changed. If it has got slower, and I am certain that my change should have made it faster, I can blame the GC again. The thing is that if you optimize stuff and you allocate fewer objects, the GC times will shift to some other location. If you are unlucky it will make your faster working code slower, because you now see GCs at times where none were before. This is where things get really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application so you can change things fast in a reliable manner. Then the normal profilers also start working again. As Vance Morrison points out, it is much more complex to profile a system against the wall clock than to optimize for CPU time. The reason is that for wall clock time analysis you need to understand how your system works and which threads (if you have not one but perhaps 20) are causing a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all. Next time: Commercial profiler shootout.
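
    A side note on the code-gen cost measured above: one way to separate the one-time overhead from steady-state throughput is to serialize a single object before starting the stopwatch. The fragment below is not part of the original post; it is a minimal sketch that reuses the Data type, the Stream helper property and the Runs counter from the sample app above and only rearranges the measurement:

        // Warm-up: the first Serialize call for a type triggers protobuf-net's code generation.
        Serializer.Serialize<Data>(Stream, data);

        // Steady state: the timed loop now excludes the one-time code-gen cost.
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Runs; i++)
        {
            Serializer.Serialize<Data>(Stream, data);
        }
        sw.Stop();
        Console.WriteLine("Steady state: {0:N0} ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs);

    Measured this way, the first call absorbs the code generation for Data, and the timed loop reflects the per-object cost that the break-even estimate above is based on.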

    Read the article

  • Overriding GetHashCode in a mutable struct - What NOT to do?

    - by Kyle Baran
    I am using the XNA Framework to make a learning project. It has a Point struct which exposes an X and Y value; for the purpose of optimization, it breaks the rules for proper struct design, since it's a mutable struct. As Marc Gravell, Jon Skeet, and Eric Lippert point out in their respective posts about GetHashCode() (which Point overrides), this is a rather bad thing, since if an object's values change while it's contained in a hashmap (e.g., in LINQ queries), it can become "lost". However, I am making my own Point3D struct, following the design of Point as a guideline. Thus, it too is a mutable struct which overrides GetHashCode(). The only difference is that mine exposes an int for X, Y, and Z values, but it is fundamentally the same. The signatures are below:

        public struct Point3D : IEquatable<Point3D>
        {
            public int X;
            public int Y;
            public int Z;

            public static bool operator !=(Point3D a, Point3D b) { }
            public static bool operator ==(Point3D a, Point3D b) { }
            public Point3D Zero { get; }

            public override int GetHashCode() { }
            public override bool Equals(object obj) { }
            public bool Equals(Point3D other) { }
            public override string ToString() { }
        }

    I have tried to break my struct in the way they describe, namely by storing it in a List<Point3D>, as well as changing the value via a method using ref, but I did not encounter the behavior they warn about (maybe a pointer might allow me to break it?). Am I being too cautious in my approach, or should I be okay to use it as is?
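
    For what it's worth, the "lost" behavior is hard to reproduce with a List<Point3D>, because collections of value types store copies. One way it does show up is when a reference type's hash code depends on a mutable struct field and that field is changed after the object has been added to a hash-based collection. The sketch below is a hypothetical illustration (the Node class and the simplified Point3D are mine, not XNA's):

        using System;
        using System.Collections.Generic;

        // Simplified stand-in for a mutable point struct (not the XNA type).
        struct Point3D
        {
            public int X;
            public int Y;
            public int Z;
            public override int GetHashCode() { return X ^ Y ^ Z; }
        }

        // Hypothetical reference type whose identity in a hash set depends on the mutable struct.
        class Node
        {
            public Point3D Position;   // public mutable struct field

            public override int GetHashCode() { return Position.GetHashCode(); }
            public override bool Equals(object obj)
            {
                return obj is Node && ((Node)obj).Position.Equals(Position);
            }
        }

        class MutableKeyDemo
        {
            static void Main()
            {
                var set = new HashSet<Node>();
                var node = new Node { Position = new Point3D { X = 1, Y = 2, Z = 3 } };
                set.Add(node);                          // bucketed by the hash of (1, 2, 3)

                node.Position.X = 99;                   // mutation changes the hash code after insertion

                Console.WriteLine(set.Contains(node));  // False: the very same reference is now "lost"
            }
        }

    With a List<Point3D> alone, as in the test described above, every Add stores a copy, which is why mutating the original afterwards does not corrupt the list; the danger appears as soon as a hash code computed from mutable state changes after insertion.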

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months. With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.   There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example. Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary. EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY 102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000" 101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000" 103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000" 304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000" 333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000" 100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000" 334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000" 400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000" Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data. Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts. First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact. 
To create the Database Connection artifact, I must know the location of, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table. I click the New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button. In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact and clicking Finish closes the wizard and saves the artifact. Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job ID’s to integers, and convert the salary to a decimal value. Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row. “ Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact. Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).  
Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type. But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date). As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes. And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value. By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself. Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.  
If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator. The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy as another wizard is capable of using the schema artifact assigned to the Read Table operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu. The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the LastSalary column.  I also provide the name for the table. This completes development of the application.  The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables. Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • EM12c Release 4: Cloud Control to Major Tom...

    - by abulloch
    With the latest release of Enterprise Manager 12c, Release 4 (12.1.0.4) the EM development team has added new functionality to assist the EM Administrator to monitor the health of the EM infrastructure.   Taking feedback delivered from customers directly and through customer advisory boards some nice enhancements have been made to the “Manage Cloud Control” sections of the UI, commonly known in the EM community as “the MTM pages” (MTM stands for Monitor the Monitor).  This part of the EM Cloud Control UI is viewed by many as the mission control for EM Administrators. In this post we’ll highlight some of the new information that’s on display in these redesigned pages and explain how the information they present can help EM administrators identify potential bottlenecks or issues with the EM infrastructure. The first page we’ll take a look at is the newly designed Repository information page.  You can get to this from the main Setup menu, through Manage Cloud Control, then Repository.  Once this page loads you’ll see the new layout that includes 3 tabs containing more drill-down information. The Repository Tab The first tab, Repository, gives you a series of 6 panels or regions on screen that display key information that the EM Administrator needs to review from time to time to ensure that their infrastructure is in good health. Rather than go through every panel let’s call out a few and let you explore the others later yourself on your own EM site.  Firstly, we have the Repository Details panel. At a glance the EM Administrator can see the current version of the EM repository database and more critically, three important elements of information relating to availability and reliability :- Is the database in Archive Log mode ? Is the database using Flashback ? When was the last database backup taken ? In this test environment above the answers are not too worrying, however, Production environments should have at least Archivelog mode enabled, Flashback is a nice feature to enable prior to upgrades (for fast rollback) and all Production sites should have a backup.  In this case the backup information in the Control file indicates there’s been no recorded backups taken. The next region of interest to note on this page shows key information around the Repository configuration, specifically, the initialisation parameters (from the spfile). If you’re storing your EM Repository in a Cluster Database you can view the parameters on each individual instance using the Instance Name drop-down selector in the top right of the region. Additionally, you’ll note there is now a check performed on the active configuration to ensure that you’re using, at the very least, Oracle minimum recommended values.  Should the values in your EM Repository not meet these requirements it will be flagged in this table with a red X for non-compliance.  You can of-course change these values within EM by selecting the Database target and modifying the parameters in the spfile (and optionally, the run-time values if the parameter allows dynamic changes). The last region to call out on this page before moving on is the new look Repository Scheduler Job Status region. This region is an update of a similar region seen on previous releases of the MTM pages in Cloud Control but there’s some important new functionality that’s been added that customers have requested. First-up - Restarting Repository Jobs.  
As you can see from the graphic, you can now optionally select a job (by selecting the row in the UI table element) and click on the Restart Job button to take care of any jobs which have stopped or stalled for any reason.  Previously this needed to be done at the command line using EMDIAG or through a PL/SQL package invocation.  You can now take care of this directly from within the UI. Next, you’ll see that a feature has been added to allow the EM administrator to customise the run-time for some of the background jobs that run in the Repository.  We heard from some customers that ensuring these jobs don’t clash with Production backups, etc is a key requirement.  This new functionality allows you to select the pencil icon to edit the schedule time for these more resource intensive background jobs and modify the schedule to avoid clashes like this. Moving onto the next tab, let’s select the Metrics tab. The Metrics Tab There’s some big changes here, this page contains new information regions that help the Administrator understand the direct impact the in-bound metric flows are having on the EM Repository.  Many customers have provided feedback that they are in the dark about the impact of adding new targets or large numbers of new hosts or new target types into EM and the impact this has on the Repository.  This page helps the EM Administrator get to grips with this.  Let’s take a quick look at two regions on this page. First-up there’s a bubble chart showing a comprehensive view of the top resource consumers of metric data, over the last 30 days, charted as the number of rows loaded against the number of collections for the metric.  The size of the bubble indicates a relative volume.  You can see from this example above that a quick glance shows that Host metrics are the largest inbound flow into the repository when measured by number of rows.  Closely following behind this though are a large number of collections for Oracle Weblogic Server and Application Deployment.  Taken together the Host Collections is around 0.7Mb of data.  The total information collection for Weblogic Server and Application Deployments is 0.38Mb and 0.37Mb respectively. If you want to get this information breakdown on the volume of data collected simply hover over the bubble in the chart and you’ll get a floating tooltip showing the information. Clicking on any bubble in the chart takes you one level deeper into a drill-down of the Metric collection. Doing this reveals the individual metric elements for these target types and again shows a representation of the relative cost - in terms of Number of Rows, Number of Collections and Storage cost of data for each Metric type. Looking at another panel on this page we can see a different view on this data. This view shows a view of the Top N metrics (the drop down allows you to select 10, 15 or 20) and sort them by volume of data.  In the case above we can see the largest metric collection (by volume) in this case (over the last 30 days) is the information about OS Registered Software on a Host target. Taken together, these two regions provide a powerful tool for the EM Administrator to understand the potential impact of any new targets that have been discovered and promoted into management by EM12c.  It’s a great tool for identifying the cause of a sudden increase in Repository storage consumption or Redo log and Archive log generation. 
Using the information on this page EM Administrators can take action to mitigate any load impact by deploying monitoring templates to the targets causing most load if appropriate.   The last tab we’ll look at on this page is the Schema tab. The Schema Tab Selecting this tab brings up a window onto the SYSMAN schema with a focus on Space usage in the EM Repository.  Understanding what tablespaces are growing, at what rate, is essential information for the EM Administrator to stay on top of managing space allocations for the EM Repository so that it works as efficiently as possible and performs well for the users.  Not least because ensuring storage is managed well ensures continued availability of EM for monitoring purposes. The first region to highlight here shows the trend of space usage for the tablespaces in the EM Repository over time.  You can see the upward trend here showing that storage in the EM Repository is being consumed on an upward trend over the last few days here. This is normal as this EM being used here is brand new with Agents being added daily to bring targets into monitoring.  If your Enterprise Manager configuration has reached a steady state over a period of time where the number of new inbound targets is relatively small, the metric collection settings are fairly uniform and standardised (using Templates and Template Collections) you’re likely to see a trend of space allocation that plateau’s. The table below the trend chart shows the Top 20 Tables/Indexes sorted descending by order of space consumed.  You can switch the trend view chart and corresponding detail table by choosing a different tablespace in the EM Repository using the drop-down picker on the top right of this region. The last region to highlight on this page is the region showing information about the Purge policies in effect in the EM Repository. This information is useful to illustrate to EM Administrators the default purge policies in effect for the different categories of information available in the EM Repository.  Of course, it’s also been a long requested feature to have the ability to modify these default retention periods.  You can also do this using this screen.  As there are interdependencies between some data elements you can’t modify retention policies on a feature by feature basis.  Instead, retention policies take categories of information and bundles them together in Groups.  Retention policies are modified at the Group Level.  Understanding the impact of this really deserves a blog post all on it’s own as modifying these can have a significant impact on both the EM Repository’s storage footprint and it’s performance.  For now, we’re just highlighting the features visibility on these new pages. As a user of EM12c we hope the new features you see here address some of the feedback that’s been given on these pages over the past few releases.  We’ll look out for any comments or feedback you have on these pages ! 

    Read the article

  • Problem with SLATEC routine usage with gfortran

    - by user39461
    I am trying to compute the Bessel function of the second kind (Bessel_y) using the SLATEC's Amos library available on Netlib. Here is the SLATEC code I use. Below I have pasted my test program that calls SLATEC routine CBESY. PROGRAM BESSELTEST IMPLICIT NONE REAL:: FNU INTEGER, PARAMETER :: N = 2, KODE = 1 COMPLEX,ALLOCATABLE :: CWRK (:), CY (:) COMPLEX:: Z, ci INTEGER :: NZ, IERR ALLOCATE(CWRK(N), CY(N)) ci = cmplx (0.0, 1.0) FNU = 0.0e0 Z = CMPLX(0.3e0, 0.4e0) CALL CBESY(Z, FNU, KODE, N, CY, NZ, CWRK, IERR) WRITE(*,*) 'CY: ', CY WRITE(*,*) 'IERR: ', IERR STOP END PROGRAM And here is the output of the above program: CY: ( 5.78591091E-39, 5.80327020E-39) ( 0.0000000 , 0.0000000 ) IERR: 4 Ierr = 4 meaning there is some problem with the input itself. To be precise, the IERR = 4 means the following as per the header info in CBESY.f file: ! IERR=4, CABS(Z) OR FNU+N-1 TOO LARGE - NO COMPUTA- ! TION BECAUSE OF COMPLETE LOSSES OF SIGNIFI- ! CANCE BY ARGUMENT REDUCTION Clearly, CABS(Z) (which is 0.50) or FNU + N - 1 (which is 1.0) are not too large but still the routine CBESY throws the error message number 4 as above. The CY array should have following values for the argument given in above code: CY(1) = -0.4983 + 0.6700i CY(2) = -1.0149 + 0.9485i These values are computed using Matlab. I can't figure out what's the problem when I call CBESY from SLATEC library. Any clues? Much thanks for the suggestions/help. PS: if it is of any help, I used gfortran to compile, link and then create the SLATEC library file ( the .a file ) which I keep in the same directory as my test program above. shell command to execute above code: gfortran -c BesselTest.f95 gfortran -o a *.o libslatec.a a GD.

    Read the article

  • How do I lookup a 'quantity' of items in excel?

    - by KronoS
    Let's say I have a quantity of items in a column of cells:

        1 2 3 4 5 4 3 2 1 2 3 4

    What I want to be able to do is count how many of each unique "item" there are in this array:

        1 -- 2
        2 -- 3
        3 -- 3
        4 -- 3

    and so forth (that is how I want the resulting table to look). Also, is there a way to accomplish this if I don't know all of the values of the array to begin with? I'm looking for a way to have Excel search an array, find a unique value, count how many times that value is in the array, and then move on to the next value.

    Read the article

  • APC (PHP Cache) Uptime 0 minutes, not caching

    - by Jussi
    My goal is to implement APC for opcode cache for a drupal 6 production site. I have so far tested APC with several php files with and without including other php files with include_once. Also tried to tweak the apc.ini values for shm_size, apc.include_once_override and apc.stat. Restarted apache every time. Resulting in apc.php not showing any changes in any values. (except of course the changed apc.ini values are shown as they should) Every time i refresh the apc.php test page, the start time resets as the current time showing uptime 0 minutes. apc.php -testpage shows: General Cache InformationAPC Version 3.1.9 PHP Version 5.2.10 APC Host xxxx.xx.xx Server Software Apache/2.2.3 (CentOS) Shared Memory 1 Segment(s) with 128.0 MBytes (mmap memory, pthread mutex Locks locking) Start Time 2011/07/26 11:53:56 Uptime 0 minutes File Upload Support 1 Cached Files 0 ( 0.0 Bytes) Hits 1 Misses 1 Request Rate (hits, misses) 2.00 cache requests/second Hit Rate 1.00 cache requests/second Miss Rate 1.00 cache requests/second Insert Rate 0.00 cache requests/second Cache full count 0 Cached Variables 0 ( 0.0 Bytes) Hits 0 Misses 0 Request Rate (hits, misses) 0.00 cache requests/second Hit Rate 0.00 cache requests/second Miss Rate 0.00 cache requests/second Insert Rate 0.00 cache requests/second Cache full count 0 apc.cache_by_default 1 apc.canonicalize 1 apc.coredump_unmap 0 apc.enable_cli 0 apc.enabled 1 apc.file_md5 0 apc.file_update_protection 2 apc.filters apc.gc_ttl 3600 apc.include_once_override 0 apc.lazy_classes 0 apc.lazy_functions 0 apc.max_file_size 16 apc.mmap_file_mask /tmp/apcphp5.095eRm apc.num_files_hint 1024 apc.preload_path apc.report_autofilter 0 apc.rfc1867 0 apc.rfc1867_freq 0 apc.rfc1867_name APC_UPLOAD_PROGRESS apc.rfc1867_prefix upload_ apc.rfc1867_ttl 3600 apc.serializer default apc.shm_segments 1 apc.shm_size 128M apc.slam_defense 0 apc.stat 0 apc.stat_ctime 0 apc.ttl 7200 apc.use_request_time 1 apc.user_entries_hint 4096 apc.user_ttl 7200 apc.write_lock 1 Host Status Diagrams: Free: 128.0 MBytes (100.0%) Hits: 1 (50.0%) Used: 20.3 KBytes (0.0%) Misses: 1 (50.0%) Detailed Memory Usage and Fragmentation: Fragmentation: 0% phpinfo shows: Server API CGI/FastCGI APC: Version 3.1.9 APC Debugging Enabled MMAP Support Enabled MMAP File Mask /tmp/apcphp5.JkKDk7 Locking type pthread mutex Locks Serialization Support php Revision $Revision: 308812 $ Build Date Jul 21 2011 14:31:12 I followed these steps to find if suexec settings would prevent caching: http://www.litespeedtech.com/support/forum/showthread.php?t=4189 [root@host /]# ps -ef|grep lsphp root 20402 17833 0 11:21 pts/0 00:00:00 grep lsphp [root@host /]# ps -waux root 17833 0.0 0.1 5004 1484 pts/0 S 10:39 0:00 bash ..indicates that there is no lsphp running on the host also I read the following article and comments, concluding that in my case the problem is not the suexec as the user apache is the httpd process owner http://www.brandonturner.net/blog/2009/07/fastcgi_with_php_opcode_cache/ also suexec command is not recognized when logged and launced as root @ host also i'm almost confident that there is no cPanel running on the host to check if a setting there would reset the running cache process at some interval This leaves me with few clues where to head next. I tried to set (with chown and chgrp) apache as the owner of the apc.php file and some test php files resulting in 500 server error. Is there a way to check if the file permissions prevent the apc stay running? I'm tremendously grateful for any suggestions or help.

    Read the article

  • How to cross-reference many character encodings with ASCII OR UTFx?

    - by Garet Claborn
    I'm working with a binary structure, the goal of which is to index the significance of specific bits for any character encoding, so that we may trigger events while doing specific checks against the profile. Each character encoding scheme has an associated system record. This record's leading value will be a C++ unsigned long long binary value and signifies the length, in bits, of encoded characters. Following the length are three values, each a bit field of that length:

        offset_mask - defines the occurrence of non-printable characters within the min,max of print_mask
        range_mask  - defines the occurrence of the most popular 50% of printable characters
        print_mask  - defines the occurrence value of printable characters

    The structure of profiles has changed from the OP of this question. Most likely I will try to factorize or compress these values in the long term instead of starting out with ranges, after reading more. I have to write some of the core functionality for these main reasons:

        - It has to fit into a particular event architecture we are using.
        - Better understanding of character encoding. I'm about to need it.
        - Integrating into non-linear design is excluding many libraries without special hooks.

    I'm unsure if there is a standard, cross-encoding mechanism for communicating such data already. I'm just starting to look into how chardet might do profiling, as suggested by @amon. The Unicode BOM would be easy enough (for my current project) if all encodings were Unicode. Of course ideally one would like to support all encodings, but I'm not asking about implementation - only the general case. How can these profiles be efficiently populated, to produce a set of bitmasks which we can use to match strings with common characters in multiple languages? If you have any editing suggestions please feel free; I am a lightweight when it comes to localization, which is why I'm trying to reach out to the more experienced. Any caveats you may be able to help with will be appreciated.
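
    To make the record layout above concrete, here is a rough sketch of how such a profile record could be modelled. It is only an illustration of the structure described in the question, not an implementation of it; the type and member names are mine, the question's own code is C++, and BigInteger here merely stands in for a bit field whose width is given by the leading length value:

        using System.Numerics;

        // Hypothetical model of the per-encoding system record described above.
        class EncodingProfile
        {
            // Leading value: length, in bits, of encoded characters
            // ("C++ unsigned long long" in the original description).
            public ulong BitsPerCharacter;

            // Occurrence of non-printable characters within the min/max of PrintMask.
            public BigInteger OffsetMask;

            // Occurrence of the most popular 50% of printable characters.
            public BigInteger RangeMask;

            // Occurrence value of printable characters.
            public BigInteger PrintMask;
        }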

    Read the article

  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this: Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters): current values are: 45984 61312 91968 new values are: 183936 245248 367872 After that, we started seeing a bizarre error message: Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long! Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval! Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval which defaults to 600 and controls periodic flushing of the route cache The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad. The CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However we set this to run once a day to keep memory leaks at bay (though we've never had one). While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity which seems to be a better option for keeping the route table size in check: gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes. We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here. /proc/sys/net/ipv4/route/gc_elasticity (8) /proc/sys/net/ipv4/route/gc_interval (60) /proc/sys/net/ipv4/route/gc_min_interval (0) /proc/sys/net/ipv4/route/gc_timeout (300) /proc/sys/net/ipv4/route/secret_interval (600) /proc/sys/net/ipv4/route/gc_thresh (?) rhash_entries (kernel parameter, default unknown?) We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune, for a high traffic HAProxy instance?

    Read the article

  • MVC OnActionExecuting to Redirect

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/08/12/mvc-onactionexecuting-to-redirect.aspx

    I recently had the following requirements in an MVC application:

        Given a new user that still has the default password
        When they first login
        Then the user must change their password and optionally provide contact information

    I found that I can override the OnActionExecuting method in a BaseController class.

        public class BaseController : Controller
        {
            [Inject]
            public ISessionManager SessionManager { get; set; }

            protected override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                // call the base method first
                base.OnActionExecuting(filterContext);

                // if the user hasn't changed their password yet, force them to the welcome page
                if (!filterContext.RouteData.Values.ContainsValue("WelcomeNewUser"))
                {
                    var currentUser = this.SessionManager.GetCurrentUser();
                    if (currentUser.FusionUser.IsPasswordChangeRequired)
                    {
                        filterContext.Result = new RedirectResult("/welcome");
                    }
                }
            }
        }

    Better yet, you can use an ActionFilterAttribute (and here) and apply the attribute to the Base or individual controllers.

        /// <summary>
        /// Redirect the user to the WelcomePage if the FusionUser.IsPasswordChangeRequired is true;
        /// </summary>
        public class WelcomePageRedirectActionFilterAttribute : ActionFilterAttribute
        {
            [Inject]
            public ISessionManager SessionManager { get; set; }

            public override void OnActionExecuting(ActionExecutingContext actionContext)
            {
                base.OnActionExecuting(actionContext);

                // if the user hasn't changed their password yet, force them to the welcome page
                if (actionContext.RouteData.Values.ContainsValue("WelcomeNewUser"))
                {
                    return;
                }

                var currentUser = this.SessionManager.GetCurrentUser();
                if (currentUser.FusionUser.IsPasswordChangeRequired)
                {
                    actionContext.Result = new RedirectResult("/welcome");
                }
            }
        }

        [WelcomePageRedirectActionFilterAttribute]
        public class BaseController : Controller
        {
            ...
        }

    The requirement is now met.

    Read the article

  • Office 365 Powershell - Export user, license type, and company field to csv file

    - by ASGJim
    I need to be able to export user name or email address (doesn't matter which), company (from the company field under the organization tab in a user account of the exchange admin console), and license type (e.g. exchange online e1, exchange online kiosk etc...) I am able to export both values in two statements into two separate files but that doesn't do me much good. I can export the username and license type with the following: Get-MSOLUser | % { $user=$_; $_.Licenses | Select {$user.displayname},AccountSKuid } | Export-CSV "sample.csv" -NoTypeInformation And, I can get the company values with the following: Get-User | select company | Export-CSV sample.csv Someone on another forum suggested this - $index = @{} Get-User | foreach-object {$index.Add($_.userprincipalname,$_.company)} Get-MsolUser | ForEach-Object { write-host $_.userprincipalname, $index[$_.userprincipalname], $_.licenses.AccountSku.Skupartnumber} That seems like it should work but it doesn't display any license info in my powershell, it's just blank. Also I wouldn't know how to export that to a csv file. Any help would be appreciated. Thanks.

    Read the article

  • How do I pipe a list of numbers straight from the shell into a command?

    - by learnvst
    How do I pipe a list of numbers straight from the shell into a command? For example, something like this:

        [1,2,3,4] | sort

    would give 1 2 3 4. EDIT: In response to the answers kindly posted so far... I ask this because I want to quickly test and debug a console application that takes many numbers as its input, without having to type lots of individual values followed by carriage returns. I'd like to just type in the 'one liner' and hit the up arrow now and then to replay the command. Ideally, I'd like to do this without using a text file containing the values (which would obviously be the simplest way to do this).

    Read the article

  • MaxClients, Server Limits etc

    - by Moe
    Hello, I'm having some problems with my server. It's getting quite a bit of traffic and is very slow, and sometimes inaccessible to my users. Here are the server specs:

        CPU: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz - 16 Processors
        RAM: 2GB

    The values for the Apache config are:

        StartServers: 5
        MaxSpareServers: 10
        MinSpareServers: 5
        MaxClients: 150
        ServerLimit: 256
        MaxRequestsPerChild: 1000
        KeepAlive: On
        KeepAliveTimeout: 5
        MaxKeepAliveRequests: 100
        TimeOut: 300

    What would be optimal values for a server of my configuration to support the maximum number of users at a reasonable speed without killing the server? Thank you.

    Read the article

  • MySQL replication hung after slave goes offline and comes back online again

    - by Ed Manet
    I have a master server and several slave servers replicating a single database. I am using MySQL 5.0 on SLES 11. During fault-tolerance testing I found that when the slave's network connection is broken (cable unplugged) and then restored, replication hangs. It shows no errors and the slave appears to be running, but the Read_Master_Log_Pos and Exec_Master_Log_Pos values do not match the log position on the master server. The Slave_IO_State is "Waiting for master to send event". The Slave_IO_Running and Slave_SQL_Running values are both "Yes". The Master_Log_File and Relay_Master_Log_File match. If I stop and start the slave or restart the mysql daemon, replication starts working again. Any ideas on what I can do about this?

    Read the article

  • Oracle parameter array binding from c# executed parallel and serial on different servers

    - by redir_dev_nut
    I have two Oracle 9i 64-bit servers, dev and prod. Calling a procedure from a C# app with parameter array binding, prod executes the procedure simultaneously for each value in the parameter array, but dev executes it for each value serially. So, suppose the sproc does:

        select count(*) into cnt from mytable where id = 123;
        if cnt = 0 then
            insert into mytable (id) values (123);
        end if;

    Assume the table initially does not have an id = 123 row. Dev gets cnt = 0 for the first array parameter value, then 1 for each of the subsequent ones. Prod gets cnt = 0 for all array parameter values and inserts id 123 for each. Is this a configuration difference, an illusion due to a speed difference, or something else?
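
    For readers unfamiliar with the feature being discussed, this is roughly what parameter array binding looks like from C# with ODP.NET. The sketch is illustrative only and is not taken from the application in question; the procedure and parameter names are made up, and it assumes the Oracle.DataAccess.Client provider:

        using System.Data;
        using Oracle.DataAccess.Client;   // ODP.NET; names below are illustrative

        class ArrayBindExample
        {
            static void CallProcForIds(string connectionString, int[] ids)
            {
                using (var conn = new OracleConnection(connectionString))
                using (var cmd = new OracleCommand("my_check_then_insert_proc", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;

                    // The procedure is executed once per element of the bound array.
                    cmd.ArrayBindCount = ids.Length;

                    var p = new OracleParameter("p_id", OracleDbType.Int32);
                    p.Value = ids;        // the whole array is bound to a single parameter
                    cmd.Parameters.Add(p);

                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }

    A single ExecuteNonQuery round trip then runs the procedure once per array element, which is the per-element execution whose dev/prod difference is being asked about.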

    Read the article

  • ASP.NET 4.0- CompressionEnabled Property in session state 4.0

    - by Jalpesh P. Vadgama
    Hello guys, this blog has been quiet for a few days because I was busy with both personal and professional work, and that's why I was not able to write up the blog posts on what I have discovered over the last few days. Here is one of the features of ASP.NET 4.0 that I am going to explain. As web developers we all know about session state. Without the use of session, any database-driven web application is incomplete. Unlike Windows Forms, web forms are stateless, so when a user interacts with a web application we need to maintain state among web pages, and we use session for maintaining state between web pages for each user. ASP.NET provides exactly this kind of session state functionality. ASP.NET session state identifies requests coming from the same user and the same browser within a specific session timeout interval, and it preserves values in the session for that time interval; that helps us maintain state among web pages for a specific user. ASP.NET session state allows us to store session in three ways: 1. InProc, 2. Session State Service, 3. SQL Server. In SQL Server mode it stores the session in SQL Server tables instead of storing it in server memory. ASP.NET 4.0 provides a new property called compressionEnabled, which means the values we store in serialized form in SQL Server are compressed with GZip, and that results in better performance. To use it, you need to set the property in web.config like the following:

        <sessionState
            allowCustomSqlDatabase="true"
            sqlConnectionString="data source=Server;Initial Catalog=aspnetsessionstatedb"
            compressionEnabled="true" />

    That's it. With this property you can get better performance when you are storing a large amount of data in session. But you should still ask yourself why you want to store a large amount of data in session, because it is against best practices. Technorati Tags: Session, ASP.NET 4.0
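
    As a rough usage sketch (not from the original post): compression only has something to work on when the serialized session payload is reasonably large, for example when a sizable object graph is stored in session while running in SQL Server mode. The ReportData type and page below are hypothetical:

        using System;
        using System.Collections.Generic;
        using System.Web.UI;

        [Serializable]
        public class ReportData            // hypothetical type, for illustration only
        {
            public List<string> Rows = new List<string>();
        }

        public partial class ReportPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                var report = new ReportData();
                for (int i = 0; i < 10000; i++)
                {
                    report.Rows.Add("row number " + i);
                }

                // With SQL Server session-state mode and compressionEnabled="true", this object
                // graph is serialized and GZip-compressed before being written to the session database.
                Session["LargeReport"] = report;
            }
        }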

    Read the article

  • Converting 2D Physics to 3D.

    - by static void main
    I'm new to game physics and I am trying to adapt a simple 2D ball simulation for a 3D simulation with the Java3D library. I have this problem: Two things: 1) I noted down the values generated by the engine: X/Y are too high and minX/minY/maxY/maxX values are causing trouble. Sometimes the balls are drawing but not moving Sometimes they are going out of the panel Sometimes they're moving on little area Sometimes they just stick at one place... 2) I'm unable to select/define/set the default correct/suitable values considering the 3D graphics scaling/resolution while they are set with respect to 2D screen coordinates, that is my only problem. Please help. This is the code: public class Ball extends GameObject { private float x, y; // Ball's center (x, y) private float speedX, speedY; // Ball's speed per step in x and y private float radius; // Ball's radius // Collision detected by collision detection and response algorithm? boolean collisionDetected = false; // If collision detected, the next state of the ball. // Otherwise, meaningless. private float nextX, nextY; private float nextSpeedX, nextSpeedY; private static final float BOX_WIDTH = 640; private static final float BOX_HEIGHT = 480; /** * Constructor The velocity is specified in polar coordinates of speed and * moveAngle (for user friendliness), in Graphics coordinates with an * inverted y-axis. */ public Ball(String name1,float x, float y, float radius, float speed, float angleInDegree, Color color) { this.x = x; this.y = y; // Convert velocity from polar to rectangular x and y. this.speedX = speed * (float) Math.cos(Math.toRadians(angleInDegree)); this.speedY = speed * (float) Math.sin(Math.toRadians(angleInDegree)); this.radius = radius; } public void move() { if (collisionDetected) { // Collision detected, use the values computed. x = nextX; y = nextY; speedX = nextSpeedX; speedY = nextSpeedY; } else { // No collision, move one step and no change in speed. x += speedX; y += speedY; } collisionDetected = false; // Clear the flag for the next step } public void collideWith() { // Get the ball's bounds, offset by the radius of the ball float minX = 0.0f + radius; float minY = 0.0f + radius; float maxX = 0.0f + BOX_WIDTH - 1.0f - radius; float maxY = 0.0f + BOX_HEIGHT - 1.0f - radius; double gravAmount = 0.9811111f; double gravDir = (90 / 57.2960285258); // Try moving one full step nextX = x + speedX; nextY = y + speedY; System.out.println("In serializedBall in collision."); // If collision detected. Reflect on the x or/and y axis // and place the ball at the point of impact. if (speedX != 0) { if (nextX > maxX) { // Check maximum-X bound collisionDetected = true; nextSpeedX = -speedX; // Reflect nextSpeedY = speedY; // Same nextX = maxX; nextY = (maxX - x) * speedY / speedX + y; // speedX non-zero } else if (nextX < minX) { // Check minimum-X bound collisionDetected = true; nextSpeedX = -speedX; // Reflect nextSpeedY = speedY; // Same nextX = minX; nextY = (minX - x) * speedY / speedX + y; // speedX non-zero } } // In case the ball runs over both the borders. 
if (speedY != 0) { if (nextY > maxY) { // Check maximum-Y bound collisionDetected = true; nextSpeedX = speedX; // Same nextSpeedY = -speedY; // Reflect nextY = maxY; nextX = (maxY - y) * speedX / speedY + x; // speedY non-zero } else if (nextY < minY) { // Check minimum-Y bound collisionDetected = true; nextSpeedX = speedX; // Same nextSpeedY = -speedY; // Reflect nextY = minY; nextX = (minY - y) * speedX / speedY + x; // speedY non-zero } } speedX += Math.cos(gravDir) * gravAmount; speedY += Math.sin(gravDir) * gravAmount; } public float getSpeed() { return (float) Math.sqrt(speedX * speedX + speedY * speedY); } public float getMoveAngle() { return (float) Math.toDegrees(Math.atan2(speedY, speedX)); } public float getRadius() { return radius; } public float getX() { return x; } public float getY() { return y; } public void setX(float f) { x = f; } public void setY(float f) { y = f; } } Here's how I'm drawing the balls: public class 3DMovingBodies extends Applet implements Runnable { private static final int BOX_WIDTH = 800; private static final int BOX_HEIGHT = 600; private int currentNumBalls = 1; // number currently active private volatile boolean playing; private long mFrameDelay; private JFrame frame; private int currentFrameRate; private Ball[] ball = new Ball[currentNumBalls]; private Random rand; private Sphere[] sphere = new Sphere[currentNumBalls]; private Transform3D[] trans = new Transform3D[currentNumBalls]; private TransformGroup[] objTrans = new TransformGroup[currentNumBalls]; public 3DMovingBodies() { rand = new Random(); float angleInDegree = rand.nextInt(360); setLayout(new BorderLayout()); GraphicsConfiguration config = SimpleUniverse .getPreferredConfiguration(); Canvas3D c = new Canvas3D(config); add("Center", c); ball[0] = new Ball(0.5f, 0.0f, 0.5f, 0.4f, angleInDegree, Color.yellow); // ball[1] = new Ball(1.0f, 0.0f, 0.25f, 0.8f, angleInDegree, // Color.yellow); // ball[2] = new Ball(0.0f, 1.0f, 0.15f, 0.11f, angleInDegree, // Color.yellow); trans[0] = new Transform3D(); // trans[1] = new Transform3D(); // trans[2] = new Transform3D(); sphere[0] = new Sphere(0.5f); // sphere[1] = new Sphere(0.25f); // sphere[2] = new Sphere(0.15f); // Create a simple scene and attach it to the virtual universe BranchGroup scene = createSceneGraph(); SimpleUniverse u = new SimpleUniverse(c); u.getViewingPlatform().setNominalViewingTransform(); u.addBranchGraph(scene); startSimulation(); } public BranchGroup createSceneGraph() { // Create the root of the branch graph BranchGroup objRoot = new BranchGroup(); for (int i = 0; i < currentNumBalls; i++) { // Create a simple shape leaf node, add it to the scene graph. 
objTrans[i] = new TransformGroup(); objTrans[i].setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE); Transform3D pos1 = new Transform3D(); pos1.setTranslation(randomPos()); objTrans[i].setTransform(pos1); objTrans[i].addChild(sphere[i]); objRoot.addChild(objTrans[i]); } BoundingSphere bounds = new BoundingSphere(new Point3d(0.0, 0.0, 0.0), 100.0); Color3f light1Color = new Color3f(1.0f, 0.0f, 0.2f); Vector3f light1Direction = new Vector3f(4.0f, -7.0f, -12.0f); DirectionalLight light1 = new DirectionalLight(light1Color, light1Direction); light1.setInfluencingBounds(bounds); objRoot.addChild(light1); // Set up the ambient light Color3f ambientColor = new Color3f(1.0f, 1.0f, 1.0f); AmbientLight ambientLightNode = new AmbientLight(ambientColor); ambientLightNode.setInfluencingBounds(bounds); objRoot.addChild(ambientLightNode); return objRoot; } public void startSimulation() { playing = true; Thread t = new Thread(this); t.start(); } public void stop() { playing = false; } public void run() { long previousTime = System.currentTimeMillis(); long currentTime = previousTime; long elapsedTime; long totalElapsedTime = 0; int frameCount = 0; while (true) { currentTime = System.currentTimeMillis(); elapsedTime = (currentTime - previousTime); // elapsed time in // seconds totalElapsedTime += elapsedTime; if (totalElapsedTime > 1000) { currentFrameRate = frameCount; frameCount = 0; totalElapsedTime = 0; } for (int i = 0; i < currentNumBalls; i++) { ball[i].move(); ball[i].collideWith(); drawworld(); } try { Thread.sleep(88); } catch (Exception e) { e.printStackTrace(); } previousTime = currentTime; frameCount++; } } public void drawworld() { for (int i = 0; i < currentNumBalls; i++) { printTG(objTrans[i], "SteerTG"); trans[i].setTranslation(new Vector3f(ball[i].getX(), ball[i].getY(), 0.0f)); objTrans[i].setTransform(trans[i]); } } private Vector3f randomPos() /* * Return a random position vector. The numbers are hardwired to be within * the confines of the box. */ { Vector3f pos = new Vector3f(); pos.x = rand.nextFloat() * 5.0f - 2.5f; // -2.5 to 2.5 pos.y = rand.nextFloat() * 2.0f + 0.5f; // 0.5 to 2.5 pos.z = rand.nextFloat() * 5.0f - 2.5f; // -2.5 to 2.5 return pos; } // end of randomPos() public static void main(String[] args) { System.out.println("Program Started"); 3DMovingBodiesbb = new 3DMovingBodies(); bb.addKeyListener(bb); MainFrame mf = new MainFrame(bb, 600, 400); } }

    Read the article

  • Enum driving a Visual State change via the ViewModel

    - by Chris Skardon
    Exciting title eh? So, here’s the problem, I want to use my ViewModel to drive my Visual State, I’ve used the ‘DataStateBehavior’ before, but the trouble with it is that it only works for bool values, and the minute you jump to more than 2 Visual States, you’re kind of screwed. A quick search has shown up a couple of points of interest, first, the DataStateSwitchBehavior, which is part of the Expression Samples (on Codeplex), and also available via Pete Blois’ blog. The second interest is to use a DataTrigger with GoToStateAction (from the Silverlight forums). So, onwards… first let’s create a basic switch Visual State, so, a DataObj with one property: IsAce… public class DataObj : NotifyPropertyChanger { private bool _isAce; public bool IsAce { get { return _isAce; } set { _isAce = value; RaisePropertyChanged("IsAce"); } } } The ‘NotifyPropertyChanger’ is literally a base class with RaisePropertyChanged, implementing INotifyPropertyChanged. OK, so we then create a ViewModel: public class MainPageViewModel : NotifyPropertyChanger { private DataObj _dataObj; public MainPageViewModel() { DataObj = new DataObj {IsAce = true}; ChangeAcenessCommand = new RelayCommand(() => DataObj.IsAce = !DataObj.IsAce); } public ICommand ChangeAcenessCommand { get; private set; } public DataObj DataObj { get { return _dataObj; } set { _dataObj = value; RaisePropertyChanged("DataObj"); } } } Aaaand finally – hook it all up to the XAML, which is a very simple UI: A Rectangle, a TextBlock and a Button. The Button is hooked up to ChangeAcenessCommand, the TextBlock is bound to the ‘DataObj.IsAce’ property and the Rectangle has 2 visual states: IsAce and NotAce. To make the Rectangle change it’s visual state I’ve used a DataStateBehavior inside the Layout Root Grid: <i:Interaction.Behaviors> <ei:DataStateBehavior Binding="{Binding DataObj.IsAce}" Value="true" TrueState="IsAce" FalseState="NotAce"/> </i:Interaction.Behaviors> So now we have the button changing the ‘IsAce’ property and giving us the other visual state: Great! So – the next stage is to get that to work inside a DataTemplate… Which (thankfully) is easy money. All we do is add a ListBox to the View and an ObservableCollection to the ViewModel. Well – ok, a little bit more than that. Once we’ve got the ListBox with it’s ItemsSource property set, it’s time to add the DataTemplate itself. Again, this isn’t exactly taxing, and is purely going to be a Grid with a Textblock and a Rectangle (again, I’m nothing if not consistent). Though, to be a little jazzy I’ve swapped the rectangle to the other side (living the dream). So, all that’s left is to add some States to the template.. (Yes – you can do that), these can be the same names as the others, or indeed, something else, I have chosen to stick with the same names and take the extra confusion hit right on the nose. Once again, I add the DataStateBehavior to the root Grid element: <i:Interaction.Behaviors> <ei:DataStateBehavior Binding="{Binding IsAce}" Value="true" TrueState="IsAce" FalseState="NotAce"/> </i:Interaction.Behaviors> The key difference here is the ‘Binding’ attribute, where I’m now binding to the IsAce property directly, and boom! It’s all gravy!   So far, so good. We can use boolean values to change the visual states, and (crucially) it works in a DataTemplate, bingo! Now. Onwards to the Enum part of this (finally!). Obviously we can’t use the DataStateBehavior, it' only gives us true/false options. So, let’s give the GoToStateAction a go. 
Now, I warn you, things get a bit complex from here, instead of a bool with 2 values, I’m gonna max it out and bring in an Enum with 3 (count ‘em) 3 values: Red, Amber and Green (those of you with exceptionally sharp minds will be reminded of traffic lights). We’re gonna have a rectangle which also has 3 visual states – cunningly called ‘Red’, ‘Amber’ and ‘Green’. A new class called DataObj2: public class DataObj2 : NotifyPropertyChanger { private Status _statusValue; public DataObj2(Status status) { StatusValue = status; } public Status StatusValue { get { return _statusValue; } set { _statusValue = value; RaisePropertyChanged("StatusValue"); } } } Where ‘Status’ is my enum. Good times are here! Ok, so let’s get to the beefy stuff. So, we’ll start off in the same manner as the last time, we will have a single DataObj2 instance available to the Page and bind to that. Let’s add some Triggers (these are in the LayoutRoot again). <i:Interaction.Triggers> <ei:DataTrigger Binding="{Binding DataObject2.StatusValue}" Value="Amber"> <ei:GoToStateAction StateName="Amber" UseTransitions="False" /> </ei:DataTrigger> <ei:DataTrigger Binding="{Binding DataObject2.StatusValue}" Value="Green"> <ei:GoToStateAction StateName="Green" UseTransitions="False" /> </ei:DataTrigger> <ei:DataTrigger Binding="{Binding DataObject2.StatusValue}" Value="Red"> <ei:GoToStateAction StateName="Red" UseTransitions="False" /> </ei:DataTrigger> </i:Interaction.Triggers> So what we’re saying here is that when the DataObject2.StatusValue is equal to ‘Red’ then we’ll go to the ‘Red’ state. Same deal for Green and Amber (but you knew that already). Hook it all up and start teh project. Hmm. Just grey. Not what I wanted. Ok, let’s add a ‘ChangeStatusCommand’, hook that up to a button and give it a whirl: Right, so the DataTrigger isn’t picking up the data on load. On the plus side, changing the status is making the visual states change. So. We’ll cross the ‘Grey’ hurdle in a bit, what about doing the same in the DataTemplate? <Codey Codey/> Grey again, but if we press the button: (I should mention, pressing the button sets the StatusValue property on the DataObj2 being represented to the next colour). Right. Let’s look at this ‘Grey’ issue. First ‘fix’ (and I use the term ‘fix’ in a very loose way): The Dispatcher Fix This involves using the Dispatcher on the View to call something like ‘RefreshProperties’ on the ViewModel, which will in turn raise all the appropriate ‘PropertyChanged’ events on the data objects being represented. So, here goes, into turdcode-ville – population – me: First, add the ‘RefreshProperties’ method to the DataObj2: internal void RefreshProperties() { RaisePropertyChanged("StatusValue"); } (shudder) Now, add it to the hosting ViewModel: public void RefreshProperties() { DataObject2.RefreshProperties(); if (DataObjects != null && DataObjects.Count > 0) { foreach (DataObj2 dataObject in DataObjects) dataObject.RefreshProperties(); } } (double shudder) and now for the cream on the cake, adding the following line to the code behind of the View: Dispatcher.BeginInvoke(() => ((MoreVisualStatesViewModel)DataContext).RefreshProperties()); So, what does this *ahem* code give us: Awesome, it makes the single bound data object show the colour, but frankly ignores the DataTemplate items. This (by the way) is the same output you get from: Dispatcher.BeginInvoke(() => ((MoreVisualStatesViewModel)DataContext).ChangeStatusCommand.Execute(null)); So… Where does that leave me? 
What about adding a button to the Page to refresh the properties – maybe it’s a timer thing? Yes, that works. Right, what about using the Loaded event then eh? Loaded += (s, e) => ((MoreVisualStatesViewModel) DataContext).RefreshProperties(); Ahhh No. What about converting the DataTemplate into a UserControl? Anything is worth a shot.. Though – I still suspect I’m going to have to ‘RefreshProperties’ if I want the rectangles to update. Still. No. This DataTemplate DataTrigger binding is becoming a bit of a pain… I can’t add a ‘refresh’ button to the actual code base, it’s not exactly user friendly. I’m going to end this one now, and put some investigating into the use of the DataStateSwitchBehavior (all the ones I’ve found, well, all 2 of them are working in SL3, but not 4…)
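    As an aside, the RelayCommand used in the ViewModel above isn't shown in the post; here is a minimal, Silverlight-friendly sketch of the usual shape of that class (an assumption about its implementation, not the author's actual code):

    using System;
    using System.Windows.Input;

    // Minimal RelayCommand: wraps a delegate so a ViewModel can expose it as an ICommand.
    public class RelayCommand : ICommand
    {
        private readonly Action _execute;
        private readonly Func<bool> _canExecute;

        public RelayCommand(Action execute) : this(execute, null) { }

        public RelayCommand(Action execute, Func<bool> canExecute)
        {
            if (execute == null) throw new ArgumentNullException("execute");
            _execute = execute;
            _canExecute = canExecute;
        }

        public event EventHandler CanExecuteChanged;

        public bool CanExecute(object parameter)
        {
            return _canExecute == null || _canExecute();
        }

        public void Execute(object parameter)
        {
            _execute();
        }

        public void RaiseCanExecuteChanged()
        {
            EventHandler handler = CanExecuteChanged;
            if (handler != null) handler(this, EventArgs.Empty);
        }
    }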

    Read the article

  • JDBC Connection Pools in Glassfish

    - by Dana Singleterry
    I've been attempting to configure Glassfish 3.1.2.2 for ADF 11g and the need arose to create a JDBC connection pool to my Oracle XE 11g database. While this is really very trivial, there were no samples of how to do it, and documentation, while good, rarely provides concrete examples. After fumbling around for a few minutes searching for an example, I gave up and figured it out on my own. Here are the steps for any of you that may be in need. This can be done either via the Glassfish command-line tool asadmin or through the admin console; I'm doing it through the admin console. Start Glassfish and connect to the admin console with the credentials you defined at installation (http://localhost:4848). Navigate to Resources | JDBC | JDBC Connection Pools and select New. Be sure to enter Resource Type and Datasource Classname under the General Settings tab; you can go with the defaults for Pool Settings etc. Go to the Additional Properties tab and create username, password, and url properties with the respective values. Navigate to Resources | JDBC | JDBC Resources and select New. Be sure to enter the JNDI Name and select the Pool Name of the JDBC connection pool you created previously. Navigate to Configurations | server-config | JVM Settings, select the JVM Options tab, and add -Doracle.jdbc.J2EE13Compliant=true, which makes sure the driver behaves in a JEE-compliant manner. To integrate the JDBC driver into a GlassFish Server domain, copy the JAR files into the domain-dir/lib directory, then restart the server. The JAR file for the Oracle 11 database driver is ojdbc6dms.jar. An upcoming entry will demonstrate configuring Glassfish for Oracle ADF Applications.

    Read the article

  • LINQ: Single vs. SingleOrDefault

    - by Paulo Morgado
    Like all other LINQ API methods that extract a scalar value from a sequence, Single has a companion SingleOrDefault. The documentation of SingleOrDefault states that it returns a single, specific element of a sequence of values, or a default value if no such element is found, although, in my opinion, it should state that it returns the single, specific element of a sequence of values, or a default value if the sequence is empty. Nevertheless, what this method does is return the default value of the source type if the sequence is empty or, like Single, throw an exception if the sequence has more than one element. I received several comments to my last post saying that SingleOrDefault could be used to avoid an exception. Well, it only "solves" half of the "problem": if the sequence has more than one element, an exception will be thrown anyway. In the end, it all comes down to semantics and intent. If it is expected that the sequence may have none or one element, then SingleOrDefault should be used. If it's not expected that the sequence will be empty and the sequence is empty, then it's an exceptional situation and an exception should be thrown right there. And, in that case, why not use Single instead? In my opinion, when a failure occurs, it's better to fail fast and early than slow and late. Other methods in the LINQ API that use the same companion pattern are ElementAt/ElementAtOrDefault, First/FirstOrDefault and Last/LastOrDefault.
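    A tiny sketch (my own, not from the post) showing the difference in behavior; the arrays are made up:

    using System;
    using System.Linq;

    class SingleVsSingleOrDefault
    {
        static void Main()
        {
            int[] empty = { };
            int[] one = { 42 };
            int[] many = { 1, 2 };

            Console.WriteLine(one.Single());            // 42
            Console.WriteLine(one.SingleOrDefault());   // 42

            Console.WriteLine(empty.SingleOrDefault()); // 0, i.e. default(int); no exception
            // empty.Single();                          // throws InvalidOperationException

            // Both throw when the sequence has more than one element:
            // many.Single();                           // throws InvalidOperationException
            // many.SingleOrDefault();                  // throws InvalidOperationException
            Console.WriteLine(many.Length);             // 2 (only here so 'many' is used)
        }
    }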

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #032

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 Complete Series of Database Coding Standards and Guidelines SQL SERVER Database Coding Standards and Guidelines – Introduction SQL SERVER – Database Coding Standards and Guidelines – Part 1 SQL SERVER – Database Coding Standards and Guidelines – Part 2 SQL SERVER Database Coding Standards and Guidelines Complete List Download Explanation and Example – SELF JOIN When all of the data you require is contained within a single table, but data needed to extract is related to each other in the table itself. Examples of this type of data relate to Employee information, where the table may have both an Employee’s ID number for each record and also a field that displays the ID number of an Employee’s supervisor or manager. To retrieve the data tables are required to relate/join to itself. Insert Multiple Records Using One Insert Statement – Use of UNION ALL This is very interesting question I have received from new developer. How can I insert multiple values in table using only one insert? Now this is interesting question. When there are multiple records are to be inserted in the table following is the common way using T-SQL. Function to Display Current Week Date and Day – Weekly Calendar Straight blog post with script to find current week date and day based on the parameters passed in the function.  2008 In my beginning years, I have almost same confusion as many of the developer had in their earlier years. Here are two of the interesting question which I have attempted to answer in my early year. Even if you are experienced developer may be you will still like to read following two questions: Order Of Column In Index Order of Conditions in WHERE Clauses Example of DISTINCT in Aggregate Functions Have you ever used DISTINCT with the Aggregation Function? Here is a simple example about how users can do it. Create a Comma Delimited List Using SELECT Clause From Table Column Straight to script example where I explained how to do something easy and quickly. Compound Assignment Operators SQL SERVER 2008 has introduced new concept of Compound Assignment Operators. Compound Assignment Operators are available in many other programming languages for quite some time. Compound Assignment Operators is operator where variables are operated upon and assigned on the same line. PIVOT and UNPIVOT Table Examples Here is a very interesting question – the answer to the question can be YES or NO both. “If we PIVOT any table and UNPIVOT that table do we get our original table?” Read the blog post to get the explanation of the question above. 2009 What is Interim Table – Simple Definition of Interim Table The interim table is a table that is generated by joining two tables and not the final result table. In other words, when two tables are joined they create an interim table as resultset but the resultset is not final yet. It may be possible that more tables are about to join on the interim table, and more operations are still to be applied on that table (e.g. Order By, Having etc). Besides, it may be possible that there is no interim table; sometimes final table is what is generated when the query is run. 
2010 Stored Procedure and Transactions If Stored Procedure is transactional then, it should roll back complete transactions when it encounters any errors. Well, that does not happen in this case, which proves that Stored Procedure does not only provide just the transactional feature to a batch of T-SQL. Generate Database Script for SQL Azure When talking about SQL Azure the most common complaint I hear is that the script generated from stand-along SQL Server database is not compatible with SQL Azure. This was true for some time for sure but not any more. If you have SQL Server 2008 R2 installed you can follow the guideline below to generate a script which is compatible with SQL Azure. Convert IN to EXISTS – Performance Talk It is NOT necessary that every time when IN is replaced by EXISTS it gives better performance. However, in our case listed above it does for sure give better performance. You can read about this subject in the associated blog post. Subquery or Join – Various Options – SQL Server Engine Knows the Best Every single time whenever there is a performance tuning exercise, I hear the conversation from developer where some prefer subquery and some prefer join. In this two part blog post, I explain the same in the detail with examples. Part 1 | Part 2 Merge Operations – Insert, Update, Delete in Single Execution MERGE is a new feature that provides an efficient way to do multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; however, at present, by using the MERGE statement, we can include the logic of such data changes in one statement that even checks when the data is matched and then just update it, and similarly, when the data is unmatched, it is inserted. 2011 Puzzle – Statistics are not updated but are Created Once Here is the quick scenario about my setup. Create Table Insert 1000 Records Check the Statistics Now insert 10 times more 10,000 indexes Check the Statistics – it will be NOT updated – WHY? Question to You – When to use Function and When to use Stored Procedure Personally, I believe that they are both different things - they cannot be compared. I can say, it will be like comparing apples and oranges. Each has its own unique use. However, they can be used interchangeably at many times and in real life (i.e., production environment). I have personally seen both of these being used interchangeably many times. This is the precise reason for asking this question. 2012 In year 2012 I had two interesting series ran on the blog. If there is no fun in learning, the learning becomes a burden. For the same reason, I had decided to build a three part quiz around SEQUENCE. The quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz. Guess the Next Value – Puzzle 1 Guess the Next Value – Puzzle 2 Guess the Next Value – Puzzle 3 Guess the Next Value – Puzzle 4 Simple Example to Configure Resource Governor – Introduction to Resource Governor Resource Governor is a feature which can manage SQL Server Workload and System Resource Consumption. We can limit the amount of CPU and memory consumption by limiting /governing /throttling on the SQL Server. If there are different workloads running on SQL Server and each of the workload needs different resources or when workloads are competing for resources with each other and affecting the performance of the whole server resource governor is a very important task. 
Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017 – Video  Retrieves unnecessary columns and increases network traffic When a new columns are added views needs to be refreshed manually Leads to usage of sub-optimal execution plan Uses clustered index in most of the cases instead of using optimal index It is difficult to debug SQL SERVER – Load Generator – Free Tool From CodePlex The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries again SQL Server using different login account and different application name. The interface of the tool is extremely easy to use and very intuitive as well. A Puzzle – Swap Value of Column Without Case Statement Let us assume there is a single column in the table called Gender. The challenge is to write a single update statement which will flip or swap the value in the column. For example if the value in the gender column is ‘male’ swap it with ‘female’ and if the value is ‘female’ swap it with ‘male’. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to update grub with puppet?

    - by Tombart
    I would like to change a line in /etc/default/grub with puppet to this: GRUB_CMDLINE_LINUX="cgroup_enable=memory" I've tried to use augeas, which seems to do this magic: exec { "update_grub": command => "update-grub", refreshonly => true, } augeas { "grub-serial": context => "/files/etc/default/grub", changes => [ "set /files/etc/default/grub/GRUB_CMDLINE_LINUX[last()] cgroup_enable=memory", ], notify => Exec['update_grub'], } It seems to work, but the result string is not in quotes, and I also want to make sure that any other values will be separated by a space. GRUB_CMDLINE_LINUX=cgroup_enable=memory Is there some mechanism to append values and escape the whole thing? GRUB_CMDLINE_LINUX="quiet splash cgroup_enable=memory"

    Read the article

< Previous Page | 203 204 205 206 207 208 209 210 211 212 213 214  | Next Page >