Search Results

Search found 22224 results on 889 pages for 'point of sale'.


  • Strategy for clients to retrieve real-time log from HTTP server

    - by Jerry Dodge
    I have an HTTP server service application, written in Delphi, which has its own logging mechanism. I would like to provide a way for multiple clients to connect to this service and get a real-time update of the log. The log in the service moves rather fast; at times there may be up to 50 messages within one second. The existing log is not saved to disk - it's only kept in the memory of the server service, from which I need to distribute it to any client that needs it. Once all clients have received a log message, it should be deleted.

    I intend to use HTTP to "ask" the server for the log and have it respond with an XML packet. The connections are not keep-alive. The only problem is that the server should only send a client the log records it hasn't seen yet, not everything. There is no way for the server to push the log to the clients in real time, so each client needs to repeatedly poll the server for the latest records. This HTTP server is very lightweight: there is no session management, and not even any type of authentication.

    The only way I see is for a client to register itself on the server, and whenever a log message is issued, the server creates a copy of it for each client, appending it to a per-client queue (string list). However, suppose there are 100 clients connected and expecting to receive this log. That means the server must create 100 copies of each message, add each copy to the end of a client queue, and wait for that client to request it. When the server replies with the XML log, it flushes (deletes) whatever is in that queue. I'm worried, however, that this could cause memory issues: each client queue might accumulate 100 log messages before the client requests the latest ones. How should I go about doing this in the fastest way possible without hindering the performance of the server? I'm trying to avoid having to create a copy of each log message for each client.
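    One way to avoid per-client copies - sketched here in C++ for illustration, since the Delphi code isn't shown, and with all names invented - is to keep a single sequence-numbered buffer and give each registered client a cursor into it. Each record is stored once; the trim step drops records that every client has already fetched:

        #include <algorithm>
        #include <cstdint>
        #include <deque>
        #include <map>
        #include <mutex>
        #include <string>
        #include <utility>
        #include <vector>

        // Hypothetical sketch: one shared log buffer, per-client read cursors.
        class LogHub {
        public:
            void publish(const std::string& msg) {
                std::lock_guard<std::mutex> g(m_);
                buf_.emplace_back(nextSeq_++, msg);
            }
            // Returns only the records this client hasn't seen yet.
            std::vector<std::string> fetch(const std::string& clientId) {
                std::lock_guard<std::mutex> g(m_);
                std::uint64_t& cur = cursors_[clientId]; // new clients start at 0
                std::vector<std::string> out;
                for (const auto& rec : buf_)
                    if (rec.first >= cur) out.push_back(rec.second);
                cur = nextSeq_;
                trim();
                return out;
            }
        private:
            void trim() { // drop records every registered client has fetched
                std::uint64_t minCur = nextSeq_;
                for (const auto& c : cursors_) minCur = std::min(minCur, c.second);
                while (!buf_.empty() && buf_.front().first < minCur) buf_.pop_front();
            }
            std::mutex m_;
            std::deque<std::pair<std::uint64_t, std::string>> buf_; // (seq, message)
            std::map<std::string, std::uint64_t> cursors_;          // clientId -> next seq
            std::uint64_t nextSeq_ = 0;
        };

    Memory stays bounded by the slowest poller rather than by the client count, so a production version would also need to evict clients that stop polling.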

    Read the article

  • Partner Blog Series: PwC Perspectives - The Gotchas, The Do's and Don'ts for IDM Implementations

    - by Tanu Sood
    It is generally accepted among business communities that technology by itself is not a silver bullet for all problems, but when it is combined with leading practices, strategy, careful planning and execution, it can create a recipe for success. This post attempts to highlight some of the best practices, along with dos and don'ts, that our practice has accumulated over the years in the identity and access management space in general, and in the context of R2 in particular.

    Best Practices

    The following section illustrates the leading practices in how to plan, implement and sustain a successful OIM deployment, based on our collective experience.

    Planning is critical, but often overlooked

    A common approach to planning an IAM program that we use with our clients is a three-step process involving a current state assessment, a future state roadmap and an executable strategy to get there. It is extremely beneficial for clients to assess their current IAM state, perform gap analysis, document the recommended controls to address the gaps, align the future state roadmap to business initiatives, and get buy-in from all stakeholders involved to improve the chances of success. When designing an enterprise-wide solution, the scalability of the technology must accommodate the future growth of the enterprise and the projected identity transactions over several years. Aligning the implementation schedule of OIM to related information technology projects also increases the chances of success.

    As a baseline, we recommend matching hardware specifications to the sizing guide for R2 published by Oracle. Adherence to this will help ensure that the hardware used to support OIM does not become a bottleneck as the adoption of new services increases. If your organization has numerous connected applications that rely on reconciliation to synchronize access data into OIM, consider hosting dedicated instances to handle reconciliation. Finally, use a clustered environment for development and have at least three environments in total, to facilitate a controlled migration to production.

    If your organization is planning to implement role-based access control, we recommend performing a role-mining exercise and consolidating your enterprise roles to keep them manageable. In addition, many organizations have multiple approval flows to control access to critical roles, applications and entitlements; if yours falls into this category, we highly recommend limiting the number of approval workflows to a small set. Most organizations have operations managed across data centers with back-end database synchronization; if yours does, ensure that the overall latency between the data centers when replicating the databases is less than ten milliseconds, so that there are no front-office performance impacts.

    Ingredients for a successful implementation

    During the development phase of your project, a number of guidelines can be followed to help increase the chances of success. Most implementations cannot be completed without customizations. If your implementation requires them, it is good practice to perform code reviews to help ensure quality and to reduce performance bottlenecks in the code. We have observed at our clients that the development process works best when team members adhere to coding leading practices. Plan time to correct coding defects, and ensure developers are empowered to report their own bugs, for maximum transparency.
    Many organizations struggle with defining a consistent approach to managing logs. This is particularly important given the amount of information that can be logged by OIM. We recommend Oracle Diagnostic Logging (ODL) for logging: ODL allows log files to be formatted in XML for easy parsing, and it does not require a server restart when log levels are changed during troubleshooting.

    Testing is a vital part of any large project, and an OIM R2 implementation is no exception. We suggest that at least one lower environment use production-like data and connectors, with configurations matching production as closely as possible. For example, use secure channels between OIM and target platforms in pre-production environments to test the configurations, the migration processes for certificates, and the additional overhead that encryption could impose. Finally, we ask our clients to perform database backups regularly and before any major change event, such as a patch or a migration between environments. Even in the lowest environments, we recommend at least a weekly backup, to prevent significant loss of time and effort. Similarly, if your organization uses virtual machines for one or more environments, take frequent snapshots so that rollbacks can occur in the event of improper configuration.

    Operate and sustain the solution to derive maximum benefits

    When migrating OIM R2 to production, certain activities help achieve a smoother transition. At our clients, we have seen that splitting the OIM tables into their own tablespaces by category (physical tables, indexes, etc.) can help manage database growth effectively. If a client hasn't enabled the Oracle-recommended indexing in the applicable database, we strongly suggest doing so to improve performance. Additionally, we work with our clients to make sure that the audit level is set to fit the organization's auditing needs, and we sometimes even allocate UPA tables and indexes into their own tablespace for easier maintenance. Finally, many of our clients schedule reconciliation tables to be archived at regular intervals, to keep the size of the database(s) reasonable and database performance optimal.

    For clients that anticipate availability issues with target applications, we strongly encourage the use of the offline provisioning capabilities of OIM R2. This reduces the provisioning process's dependency on target availability and helps avoid broken workflows. To account for this and other abnormalities, we also advocate configuring OIM's monitoring controls to alert administrators to any abnormal situations.

    Within OIM R2, we have begun advising our clients to use the 'profile' feature to encapsulate multiple commonly requested accounts, roles, and/or entitlements into a single item. By setting up a number of profiles that can be searched for and used, users spend less time performing the same steps for common tasks. We also advise our clients to follow the Oracle-recommended guides for database and application server tuning, which provide a good baseline configuration, including guidance on database connection pools, connection timeouts, user-interface threads and proper handling of adapters/plug-ins - all configurations that allow faster provisioning and better web page response times.
    Many of our clients have begun to recognize the value of data mining and a remediation process during the initial phases of an implementation (to help ensure high-quality data gets loaded) and beyond (to support ongoing maintenance and business-as-usual processes). A successful program begins by identifying the data elements and assigning each a classification level based on criticality, risk and availability, and finishes by following through with a remediation process.

    Dos and Don'ts

    Here are the most common dos and don'ts that we share with our clients, derived from our experience implementing the solution:

    Do: Scope the project into phases with realistic goals; look for quick wins to show success and value to the stakeholders. Don't: Try to "boil the ocean" by integrating all enterprise applications in the first phase.
    Do: Establish an enterprise ID (a universally unique ID across the enterprise) early in the program. Don't: Make major UI customizations that require code changes.
    Do: Have a plan in place to patch during the project (product and database), which helps alleviate major issues and roadblocks. Don't: Publish all the target entitlements if you don't anticipate their usage during access requests.
    Do: Assess your current state and prepare a roadmap to address your operational, tactical and strategic goals, aligned with your business priorities. Don't: Integrate non-production environments with your production target systems.
    Do: Defer complex integrations to later phases and take advantage of lessons learned from previous phases. Don't: Create multiple accounts for the same user on the same system if it can be avoided.
    Do: Build an identity and access data quality initiative into your plan, to identify and remediate data issues early on. Don't: Create complex approval workflows that would negatively impact productivity and SLAs.
    Do: Identify an owner of the identity systems with fair IdM knowledge and empower them with the authority to make product-related decisions; this will help overcome design hurdles. Don't: Create complex designs that are not sustainable long term and would need a major overhaul during upgrades.
    Do: Shadow your internal or external consulting resources during the implementation, to build the product skills needed to operate and sustain the solution. Don't: Treat IAM as a point solution; have an appropriate communication and training plan for IT and business users alike.

    Conclusion

    In our experience, identity programs struggle with scope, proper resourcing, and more. We suggest that companies consider the suggestions discussed in this post and leverage them to help enable their identity and access program. This concludes PwC's blog series on R2 for the month, and we sincerely hope that the information we have shared has been beneficial. For more information, or if you have questions, you can reach out to Rex Thexton, Senior Managing Director, PwC, or Dharma Padala, Director, PwC. We look forward to hearing from you.
    Meet the Writers:

    Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium- to large-scale Identity Management solutions across multiple industries, including the utility, health care, entertainment, retail and financial sectors. Dharma has 14 years of experience in delivering IT solutions, of which the past 8 have been spent implementing Identity Management solutions.

    Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience spans network, infrastructure, application and data security, where he brings a holistic point of view to problem solving.

    Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries, including financial services, health care, automotive and retail. Scott has 10 years of experience in delivering Identity Management solutions.

    John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).

    Read the article

  • C++ Accelerated Massive Parallelism

    - by Daniel Moth
    At AMD's Fusion conference, Herb Sutter announced in his keynote session a technology that our team has been working on, called C++ Accelerated Massive Parallelism (C++ AMP), and during the keynote I showed a brief demo of an app built with our technology. After the keynote, I went deeper into the technology in my breakout session. If you read both those abstracts, you'll get some information about what C++ AMP is, without being too explicit, since we published the abstracts before the technology was announced. You can find the official online announcement at Soma's blog post. Here, I just wanted to capture the key points about C++ AMP that can serve as an introduction and an FAQ. So, in no particular order, C++ AMP:

    - lowers the barrier to entry for heterogeneous hardware programmability and brings performance to the mainstream, without sacrificing developer productivity or solution portability.
    - is designed not only to help you address today's massively parallel hardware (i.e. GPUs and APUs), but also future-proofs your code investments with a forward-looking design.
    - is part of Visual C++. You don't need to use a different compiler or learn different syntax.
    - is modern C++. Not C or some other derivative.
    - is integrated and supported fully in Visual Studio vNext. Editing, building, debugging, profiling and all the other goodness of Visual Studio work well with C++ AMP.
    - provides an STL-like library as part of the existing concurrency namespace, delivered in the new amp.h header file.
    - makes it extremely easy to work with large multi-dimensional data on heterogeneous hardware, in a manner that exposes parallelization.
    - introduces only one core C++ language extension.
    - builds on DirectX (and DirectCompute in particular), which offers a great hardware abstraction layer that is ubiquitous and reliable. The architecture is such that this point can be thought of as an implementation detail that does not surface to the API layer.

    Stay tuned on my blog for more over the coming months, where I will switch from just talking about C++ AMP to showing you how to use the API with code examples. Comments about this post are welcome at the original blog.
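    To make the "STL-like library in the concurrency namespace" point concrete, here is a minimal sketch of what a C++ AMP kernel looks like, based on the amp.h API as later published - treat the exact signatures as illustrative, since this post predates the public release:

        #include <amp.h>   // the new header mentioned above
        #include <vector>
        using namespace concurrency;

        // Square every element of v on an accelerator (e.g. the GPU).
        void square_all(std::vector<float>& v) {
            array_view<float, 1> av(static_cast<int>(v.size()), v); // wraps the host data
            parallel_for_each(av.extent,            // one logical thread per element
                [=](index<1> i) restrict(amp) {     // restrict(amp) is the one language extension
                    av[i] = av[i] * av[i];
                });
            av.synchronize();                       // copy results back to v
        }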

    Read the article

  • changing drive nodes & hdparm

    - by Kalamalka Kid
    I am currently attempting to create a command that runs at startup to kill the power on two of my very noisy hard drives. I have edited the /etc/rc.local file to include:

        sudo hdparm -y /dev/sdc
        sudo hdparm -y /dev/sdd
        exit 0

    While I think this should work, the allocated drive nodes keep switching around every time I reboot. I have sda, sdb, sdc, sdd and sde, but they keep getting jumbled around, making the drive I wish to shut down different from sdd, which makes shutting down the right drive at start-up quite cumbersome. I had a perfectly functioning fstab file which disappeared, but I restored it from the backup into /etc:

        # <file system> <mount point> <type> <options> <dump> <pass>
        #Entry for /dev/sda1 :
        UUID=43c09daf-08a5-44f2-89b0-fc7c6f0d1e67 / ext4 errors=remount-ro 0 1
        #Entry for /dev/sdd1 :
        UUID=443AFBAD7FE50945 /media/DX100 ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0
        #Entry for /dev/sdb1 :
        UUID=FCE456F5E456B21E /media/GalaxyM83 ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0
        #Entry for /dev/sdf1 :
        UUID=1CA057FDA057DBB8 /media/Holideck ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0
        #Entry for /dev/sdc1 :
        UUID=7ABB49654B799D40 /media/JX3P ntfs defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0

    It seems every time I boot, the order of the drives changes, and I do not know how to resolve this. A quick workaround was to address the drives by UUID instead of device letter, editing /etc/rc.local to include:

        hdparm -y /dev/disk/by-uuid/443AFBAD7FE50945
        hdparm -y /dev/disk/by-uuid/7ABB49654B799D40

    So I thought I was in the clear, as I heard both hard drives spin down during the boot sequence. BUT: as soon as I log in, both drives start up again! So now I have to figure out what is making them start up again after log-in, or perhaps another way to get them to turn off. Is there some kind of command I can get to execute after log-in? I tried editing the startup applications to include an autossh entry:

        autossh - sudo hdparm -y /dev/disk/by-uuid/7ABB49654B799D40
        autossh - sudo hdparm -y /dev/disk/by-uuid/443AFBAD7FE50945

    but this did not seem to turn off the disks after log-in.
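    One avenue worth trying - an untested sketch, assuming the format of /etc/hdparm.conf as shipped by the Debian/Ubuntu hdparm package, with the UUIDs taken from the question above - is to set an idle spin-down timeout instead of a one-shot standby command, so the drives spin back down even after something wakes them at log-in:

        # /etc/hdparm.conf (sketch)
        /dev/disk/by-uuid/443AFBAD7FE50945 {
            spindown_time = 120    # equivalent to -S 120: spin down after 10 minutes idle
        }
        /dev/disk/by-uuid/7ABB49654B799D40 {
            spindown_time = 120
        }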

    Read the article

  • Unexpected results for projection on to plane

    - by ravenspoint
    I want to use this projection matrix:

        GLfloat shadow[] = {
            -1, 0,  0,  0,
             1, 0, -1,  1,
             0, 0, -1,  0,
             0, 0,  0, -1 };

    It should cast object shadows onto the y = 0 plane from a point light at (1, 1, -1). I create a rectangle in the x = 0.5 plane:

        glBegin( GL_QUADS );
        glVertex3f( 0.5, 0.2, -0.5 );
        glVertex3f( 0.5, 0.2, -1.5 );
        glVertex3f( 0.5, 0.5, -1.5 );
        glVertex3f( 0.5, 0.5, -0.5 );
        glEnd();

    Now if I manually multiply these vertices by the matrix, I get:

        glBegin( GL_QUADS );
        glVertex3f( 0.375, 0, -0.375 );
        glVertex3f( 0.375, 0, -1.625 );
        glVertex3f( 0, 0, -2 );
        glVertex3f( 0, 0, 0 );
        glEnd();

    which produces a reasonable display (camera at (0, 5, 0), looking down the y axis). So rather than doing the calculation manually, I should be able to use the OpenGL modelview transformation. I write this code:

        glMatrixMode( GL_MODELVIEW );
        GLfloat shadow[] = {
            -1, 0,  0,  0,
             1, 0, -1,  1,
             0, 0, -1,  0,
             0, 0,  0, -1 };
        glLoadMatrixf( shadow );
        glBegin( GL_QUADS );
        glVertex3f( 0.5, 0.2, -0.5 );
        glVertex3f( 0.5, 0.2, -1.5 );
        glVertex3f( 0.5, 0.5, -1.5 );
        glVertex3f( 0.5, 0.5, -0.5 );
        glEnd();

    But this produces a blank screen! What am I doing wrong? Is there some debug mode where I can print out the transformed vertices, so I can see where they are ending up?

    Note: people have suggested that using glMultMatrixf() might make a difference. It doesn't. Replacing glLoadMatrixf( shadow ); with glLoadIdentity(); glMultMatrixf( shadow ); gives the identical result (of course!).
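    On the "debug mode" question: the fixed pipeline has no built-in vertex print-out, but one common technique - a sketch, with the helper name invented - is to read the current matrix back with glGetFloatv and multiply a test vertex by hand. Note that OpenGL expects the 16 floats in column-major order, so a matrix written row-by-row in a C initializer is effectively transposed when loaded, and glLoadMatrixf also discards whatever camera transform was previously in MODELVIEW:

        #include <stdio.h>
        #include <GL/gl.h>

        /* Multiply one homogeneous vertex by the current MODELVIEW matrix and
           print where it ends up. OpenGL stores m column-major: m[col*4 + row]. */
        static void debug_transform(const GLfloat v[4])
        {
            GLfloat m[16], out[4];
            int row;
            glGetFloatv(GL_MODELVIEW_MATRIX, m);
            for (row = 0; row < 4; ++row)
                out[row] = m[0*4+row]*v[0] + m[1*4+row]*v[1]
                         + m[2*4+row]*v[2] + m[3*4+row]*v[3];
            printf("(%g, %g, %g, %g) -> (%g, %g, %g, %g)\n",
                   v[0], v[1], v[2], v[3], out[0], out[1], out[2], out[3]);
        }

    Comparing that print-out against the hand-multiplied values above shows quickly whether the matrix is being interpreted transposed.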

    Read the article

  • Why CoffeeScript is tough to maintain

    - by Renso
    I recently started trying out CoffeeScript, only to find that it caused more headaches. The abstraction level of jQuery was perfect: it did not dictate to coders how to design their code, it just works. However, I recently posted a request to the CoffeeScript team to consider introducing curly braces to help control the flow of logic in more complex code. For example, an if-then-else with many nested levels can be nearly impossible to debug in CoffeeScript without tracing through it. With IDEs like Visual Studio, regular JavaScript IntelliSense and auto-formatting make it easy to appropriately indent nested levels without any work on the part of the developer, and reading the result is not that hard, especially with extensions that show vertical lines in the code editor to help you see what is nested within which part of the code.

    With CoffeeScript that is not the case. The samples given on the CoffeeScript web site are of course just simple examples to explain the features, and one gets excited pretty quickly over the powerful shortcuts. I tried to convert a piece of JavaScript over to CoffeeScript and gave up, since you need to first remove ALL non-CoffeeScript coding constructs for it to even compile (js2coffee can help with that). But keeping track of nested levels became simply unmanageable in CoffeeScript.

    Furthermore, any coding language that controls the flow of logic by indentation is extremely dangerous, for obvious reasons. I liked CoffeeScript a lot, but the fact that the logical flow of the code is controlled by how much you indent - spaces or tabs - is not reliable, as the programmer has no easy way of knowing which parts of the code will get hit when the code spans a page.

    When I suggested introducing curly braces to the CoffeeScript team, one contributor advised me that my code needed to be redesigned! Needless to say, that is absurd. When I included a piece of the code, he asked me if it was legacy code. It's like saying to a Java programmer: sorry, you cannot use Java because we don't agree with how you write your code. jashkenas from the CoffeeScript blog gave some great suggestions and made the point that introducing curly braces would be very problematic for them, as they use braces to denote objects. That makes sense, but I would still love to see some way to replace flow control by spaces and indentation with something more concrete and human-readable.

    Read the article

  • JavaOne: Parleys.com, Spring Vs. Java EE and HTML5 tooling

    - by delabassee
    Parleys.com, a 2012 Duke's Choice Award winner, is an e-learning platform that hosts content from different sources (conferences, JUG meetings, etc.). There is a lot of technical content available for online and offline consumption, including many sessions on Java EE. Parleys has just released, for free, all the Devoxx 2011 sessions (video and slides synced!). From a technical point of view, Parleys.com is interesting because they switched from Spring to Java EE 6 to avoid being locked into a proprietary framework. During the GlassFish Community BoF, Stephan Janssen (Parleys.com and Devoxx founder) also presented how GlassFish supports 2000 concurrent Parleys users on a cluster of two GlassFish instances.

    Talking about Java EE and/or Spring, Harshad Oak has posted an update on the 'Spring Vs. Java EE' panel discussion that took place on Tuesday. As Arun said, standards such as Java EE do not necessarily restrain innovation: "JBoss Forge & Arquillian from RedHat are great examples of innovation in the JavaEE community. Standardization is important but innovation does continue even within that framework."

    Simplicity and productivity, along with HTML5, are the driving themes of Java EE 7. In terms of simplicity and productivity, the developer experience can also be improved by tooling. Every NetBeans release comes with a large set of improvements, and the just-released NetBeans 7.3 beta is no exception. The goal of NetBeans 7.3's Project Easel is to improve HTML5 development, something that will be handy for Java EE 7 developers. Project Easel can, for example, communicate directly with Chrome's WebKit engine; this feature was shown during Sunday's technical keynote at the end of the Java EE section. In this beta release, Chrome and the embedded JavaFX browser are the only supported browsers, but the NetBeans team plans to add support for other WebKit-based browsers over time.

    NetBeans 7.3 beta
    NetBeans 7.3 screencasts

    Today (Wednesday October 3rd) is also the final exhibition day, so make sure to visit the Java EE and GlassFish pods on the Java DEMOgrounds (Hilton Grand Ballroom, 9:30 am - 5:00 pm). Finally, here are some Java EE and GlassFish related activities worth attending today if you are at JavaOne:

    Wednesday October 3rd
    8:30-9:30am - What's New in Servlet 3.1: An Overview - Parc 55, Mission
    8:30-9:30am - Bean Validation 1.1: What's New Under the Hood - Parc 55, Cyril Magnin II/III
    10:00-11:00am - JSR 353: Java API for JSON Processing - Parc 55, Mission
    10:00-12:00pm - Tutorial: Integrating Your Service into the GlassFish PaaS Platform - Parc 55, Devisidero
    11:30-12:30pm - What's New in JSF: A Complete Tour of JSF 2.2 - Parc 55, Cyril Magnin I
    11:30-12:30pm - Best of Both Worlds: Java Persistence with NoSQL and SQL - Parc 55, Mission
    1:00-2:00pm - Sharding Middleware to Achieve Elasticity and High Availability in the Cloud - Parc 55, Market Street
    1:00-2:00pm - Pimp My RESTful Java Applications - Parc 55, Cyril Magnin I
    3:00-4:00pm - Migrating Spring to Java EE - Parc 55, Cyril Magnin II/III
    4:30-5:30pm - JavaEE.Next(): Java EE 7, 8, and Beyond - Parc 55, Cyril Magnin II/III
    4:30-5:30pm - HTML5 WebSocket and Java - Parc 55, Cyril Magnin I
    4:30-5:30pm - Easy Middleware for Your Embedded Device - Nikko Ballroom II/III

    Read the article

  • PowerShell – Show a Notification Balloon

    - by BuckWoody
    In my presentations on PowerShell I sometimes want to start a process (like a backup) that will take some time. I normally pop up a notification "balloon" at the start, then do the bulk of the work, and then pop up a balloon at the end to let me know it's done. You can actually try out this little sample (on a test system, of course) without any other code to see what it does. Then just put your other PowerShell commands in the # Do some work part. Oh - throw an icon (.ico file) in a C:\temp directory, or point that somewhere else. (No, this probably isn't original. I can't remember where I saw the original code, but I've modified it a bit anyway, so if you're the original author and this looks slightly familiar, post a comment.)

        [void] [System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
        $objBalloon = New-Object System.Windows.Forms.NotifyIcon
        $objBalloon.Icon = "C:\temp\Folder.ico"
        # You can use the value Info, Warning, Error
        $objBalloon.BalloonTipIcon = "Info"
        # Put what you want to say here for the start of the process
        $objBalloon.BalloonTipTitle = "Begin Title"
        $objBalloon.BalloonTipText = "Begin Message"
        $objBalloon.Visible = $True
        $objBalloon.ShowBalloonTip(10000)

        # Do some work

        # Put what you want to say here for the completion of the process
        $objBalloon.BalloonTipTitle = "End Title"
        $objBalloon.BalloonTipText = "End Message"
        $objBalloon.Visible = $True
        $objBalloon.ShowBalloonTip(10000)

    Script disclaimer, for people who need to be told this sort of thing: never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or virtual machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It's just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article

  • How to access files on a drive from an older system, mounted in a new system?

    - by David Thomas
    I've recently built a new system, after a rather large physical injury was sustained by my previous system (a precarious balance, and gravity, were not a happy mix). Surprisingly, the /home drive of that system appears to have more or less survived the trauma. However, I decided to use a fresh drive for the / (and swap) partition(s), and another fresh drive for the new /home. Now that that's working, I decided to install the old /home drive (which until now I had assumed would be entirely dead and without capacity for use) into the new system, to recover the files and data so far as is possible.

    At this point I've run into a snag: I have no idea how to go about this (with Windows it was relatively easy - the new drive would be the latest letter of the alphabet, and you'd go from there). With Disk Utility (System - Administration - Disk Utility) I've worked out which drive it is (/dev/sda), but clicking on 'mount' produces an error:

        1: helper failed with: mount: according to mtab, /dev/sdb1 is already mounted on /
        mount failed

    ...if it is mounted on / I can't see it. I'm also moderately confused by the disk (device /dev/sda) being referred to as /dev/sdb1. Any and all insights would be incredibly welcome (I've already voted for Idea #9063: New internal hard drives default automount at Brainstorm).

    Edited in response to Roland's request for a screenshot of Disk Utility. Details (so far as I know them): the 40 GB disk is / and swap, the 1.0 TB Samsung is /home, and the 1.0 TB Hitachi is from the old system (and was the old /home drive). Output from sudo fdisk -l pasted below:

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000bef00

        Device Boot Start End Blocks Id System
        /dev/sda1 1 121601 976760001 83 Linux

        Disk /dev/sdb: 40.0 GB, 40018599936 bytes
        255 heads, 63 sectors/track, 4865 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00037652

        Device Boot Start End Blocks Id System
        /dev/sdb1 * 1 4742 38084608 83 Linux
        /dev/sdb2 4742 4866 993281 5 Extended
        /dev/sdb5 4742 4866 993280 82 Linux swap / Solaris

        Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e8d46

        Device Boot Start End Blocks Id System
        /dev/sdc1 1 121602 976760832 83 Linux
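    For reference, the minimal command-line route - a sketch only: the mount point name here is made up, and which of sda1/sdc1 is actually the old drive needs confirming first - would be:

        sudo mkdir -p /mnt/oldhome
        sudo mount /dev/sda1 /mnt/oldhome   # substitute the old drive's partition
        ls /mnt/oldhome                     # the old files should appear here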

    Read the article

  • Partner Enablement update from Oracle University (November)

    - by swalker
    Executive overview of Oracle Fusion Applications in one day, from your desktop. Designed from the ground up using the latest technology advances and incorporating the best practices gathered from Oracle's thousands of customers, Oracle Fusion Applications are 100% open-standards-based business applications that set a new standard for the way we innovate, work, and adopt technology. Learn more about them: Oracle University has scheduled a 1-day executive overview as a Live Virtual Class on 1 December and 2 December. Your OPN discount applies to the standard price shown on the website. New in-class and online dates will be shared on education.oracle.com. Book online or contact your local Oracle University representative for scheduling requests and more information.

    Two new intensive OPN Only Boot Camps

    The following OPN Only Boot Camps have just been made available:

    3-day intensive technical training, Oracle Exadata 11g: prepares you to become an Oracle Exadata 11g Certified Implementation Specialist. Currently scheduled in Germany and the United Kingdom; can be organized in any country. Live Virtual Class dates: 15-17 Feb. 2012 and 16-18 May 2012.
    5-day intensive training, Oracle BI Enterprise Edition 11g Implementation. Currently scheduled in Sweden; can be organized in any country.

    See the full OPN Only Boot Camp schedule.

    Certification news: Java SE 7

    Be among the first to earn the Java SE 7 certification. The following exams have recently become available in beta testing:

    1Z1-805 Upgrade to Java SE 7 Programmer (beta until 17 Dec. 2011) - Oracle Certified Professional, Java SE 7 Programmer
    1Z1-803 Java SE 7 Programmer I (beta until 17 Dec. 2011) - Oracle Certified Associate, Java SE 7 Programmer

    A beta exam gives you two distinct advantages: you will be among the first to obtain the certification, and you benefit from a reduced fee. Beta exams can be taken at any Pearson VUE test center.

    New courses

    Among this month's new offerings from Oracle University: new courses - click here to learn more.

    Do your partners want the Oracle University experts' point of view? Advise them to consult the Oracle University Technology newsletters (http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=289&p_nl=tech) and Applications newsletters.

    Stay connected to Oracle University: OracleMix, Twitter, LinkedIn, Facebook.

    Read the article

  • RIM's current BB7 developer toolset is a joke

    - by mbrit
    tl;dr - RIM's current developer toolset is not fit for purpose.

    Background to this is that I'm currently working on a PhoneGap/Cordova project for a client that has to run on BlackBerry. The tooling is so ridiculous to use that even though I had a gentle dig at them in a Guardian piece, it's worth having a more full-on attack.

    At the moment, RIM's pitch is that apps are built for the current BBOS7 devices using WebWorks, an HTML-based toolset. Essentially a browser is spun up in a native app container and your app is powered by JavaScript. Specific JavaScript libraries exist that thunk down to native capabilities on the device. I happen to use PhoneGap/Cordova in combination with this.

    The tooling is non-existent. I'm using TextMate, Ant, and Terminal to develop the app. There's no "console.log" output and no debugging; the only way to instrument the app is to put "alert" calls in your code. Apart from the fact that that's *not* fine in 2012, how about this: every time you deploy a new app to the device, the device has to reboot. This process takes six minutes on a relatively modern BlackBerry device. How about this as well - in order to get a file into the package, it has to be signed. My small app over here has 100 different files (75 or so generated). Signing doesn't happen locally; it happens on RIM's servers in Waterloo. Thus whenever you deploy the app, the utility has to call RIM's servers 100 times. More to the point, sometimes during the day these servers have "micro-downtime" moments where they're unreachable for five or ten minutes, normally two or three times a day. Oh yes, you'll also get an email sent to you per signing, on success or failure. 100 inbound emails, per deployment.

    (I started this post at the beginning of one of these cycles, by the way. That's how long it takes to build and deploy *once*. By the way, the change I made didn't work.)

    To clarify:
    - Change the script.
    - Build it using Ant.
    - Ant will spin up a Java app that talks to RIM's servers to sign it.
    - Receive 100 emails, assuming the server is up.
    - App deployed - takes about 30 seconds.
    - BlackBerry device restarts - takes about six minutes.
    - Find and open the app. Go through security prompts.
    - Test the app, with no "console.log" output and no debugger.

    "Why not use the simulator?" I hear you ask. Well, apart from the fact that the simulator refused to reach any network service over HTTPS that I happen to own? (Some people suggest changing DNS settings for this known issue.) Admittedly, the simulator does show you console.log, but you still have the "six minute" restart issue on the simulator.

    Developers will understand this problem. Breaking concentration for six-plus minutes every time you want to deploy an app turns development into a nightmare. Combine that with no worthy debugging tools, and the toolset becomes a joke.

    Read the article

  • Webcast Replay : SANS Institute Product Review of Oracle Identity Manager

    - by B Shashikumar
    Thanks to everyone who attended the SANS Institute webinar covering the product review of Oracle Identity Manager, and a special thanks to our guest speakers from SuperValu, Phillip Black and Patrick Abreo. If you missed the webcast, you can catch a replay here, and here are the slides that were used in the webcast.

    There were many questions that we could not answer as we ran out of time. We have captured some of the questions, with responses, below.

    Q: Is Oracle Identity Analytics still offered as a separate product, or is it part of Oracle Identity Manager?
    A: Oracle Identity Manager and Oracle Identity Analytics are now offered as part of Oracle Identity Governance Suite. OIA and OIM share a common UI architecture, a common data model and common support for connected and disconnected resources.

    Q: When requesting new access/entitlements, is there an approval process?
    A: Yes. We leverage SOA BPEL-based workflows for approvals.

    Q: Are the identity self-service capabilities based on Oracle ADF?
    A: Yes, they are completely based on Oracle ADF.

    Q: Can you give some examples of personalization and customization with Oracle Identity Manager 11gR2?
    A: With the new UI config framework we can enable different levels of UI customization. Customers now have the ability to customize by point-and-click or by drag-and-drop, without any need for coding, so users can easily personalize the interface of their application within the browser. For example, they can change the logo; rearrange or hide home page regions; save and re-use regularly searched items; configure searchable fields and search-result columns; and have sorting preferences remembered. For more sophisticated customization, customers can also edit the standard JSF within the page to alter business rules, modify page flows, page layouts and other items.

    Q: Can you explain the role of sandboxes in customization?
    A: Customers can make their custom changes within a sandbox so that they don't impact the production environment. They can make their changes, validate them, stage and then commit them without affecting production users. This is similar to how source code control systems like Perforce work.

    To watch a replay of the webcast, click here.

    Read the article

  • I'd like to switch from 32-bit to 64-bit within same version

    - by Marty Fried
    I have a 32-bit installation of 11.10 on my 64-bit (4 GB) home AMD system. I have recently read up a bit on the 64-bit version, and it seems it would now be a marginally better choice for me. I have read about several methods to help reinstall all the various apps, using either dpkg's get-selections/set-selections and dselect in various ways, or Synaptic's save/get markings. The problem is that I've read several variations, and I'm not sure which is best. I have enough disk space to do this with a brand new partition, so I'm not too worried about destroying anything, but I don't really want to make it my life's work - hence my appeal for expert tips.

    Since it's the same version, would it be safe to copy configuration files from the 32-bit system? I'd guess my home directory and /etc might be enough, and that would save at least most of the reconfiguration time. But are there differences in the configuration files in either of these directories between 32 and 64 bits that might cause problems? After reinstalling to 64-bit, I can then continue along the 64-bit path for upgrades; I thought it would be easier to switch within the same version than to try to reinstall apps and upgrade at the same time.

    Some methods I've seen suggested, among others:

    A. From the Ubuntu forums: On your old system (assuming it is still working), start up Synaptic and go File -> Save Markings, and choose a file name along with a location (like a USB drive) that you can use once you have installed your new system. You need to check "Save full state, not only changes" at the bottom. This file contains a list of all your currently installed packages. When you have installed and booted up your new system (and configured your repositories to the best for your location - as we all do, don't we?), start up Synaptic and go File -> Read Markings, point it at your saved file, and after that has completed, select Apply to kick off the download and installation of all of the packages you had installed previously.

    B. From the same discussion: According to section 6.4.9 of the Debian Reference Manual, the following will save both the list of packages installed and their debconf configuration:

        # dpkg --get-selections "*" >myselections   # or use \*
        # debconf-get-selections > debconfsel.txt

    and the following will reinstall and reconfigure them:

        # dselect update
        # debconf-set-selections < debconfsel.txt
        # dpkg --set-selections <myselections
        # apt-get -u dselect-upgrade   # or dselect install

    C. A variation on the above that I've seen a lot, this one from Stack Overflow:

        dpkg --get-selections > package_list

    then, on the new install:

        cat package_list | sudo dpkg --set-selections && sudo apt-get dselect-upgrade

    I don't really understand B, or why it's slightly different from many others.

    Read the article

  • Will HTML5 make Silverlight redundant?

    - by Laila
    One of the great features of Adobe AIR v2, which launched this month, is its support for some of the 2008 draft of HTML5. The HTML5 specification was started in 2004, but the full spec will probably not be approved by the W3C until around 2022. One might have thought that it would take years yet to reach the point where any browsers were remotely HTML5-compliant, but enough of HTML5 is published and agreed to make a lot of it possible, and Safari and Adobe have got there thanks to Apple's open-source WebKit.

    The race for HTML5 has been fuelled by the demand from Apple and Google for advanced graphics, typography, animations and transitions without having to rely on third-party browser plug-ins such as Adobe Flash or Silverlight. There is good reason for this haste: Flash doesn't support touch devices and has been slow in supporting hardware video decoders such as H.264. There is a strong requirement to do all that Flash can do in an open-standards way.

    Those with proprietary solutions remain sniffy. In AIR 2, Adobe pointedly disables the HTML5 video and audio tags that allow basic playing of media content, saying that the specification is not final and there is still no standard for the supported formats, and adding that Safari implements a 'disjoint set' of codecs. Microsoft also has little interest in HTML5, as it has so much invested in Silverlight. Google stands to gain from Adobe AIR for Android, as it will allow a lot of applications to be migrated easily to the platform, so it sees Apple's war on Flash as a way of gaining market share.

    Why do we care? Because HTML5/CSS3 provides facilities far beyond HTML4, bringing the reality of browser-based applications a lot closer. Probably the most generally useful is the advanced typography: Safari and AIR both already support a way of reflowing text in a container across an arbitrary number of columns, and page-specific fonts can also be specified. Then there is 2D drawing, video, transitions, local storage, AJAX navigation and mutable DOM prototypes.

    HTML5 is likely to provide the base functionality that is required, but it is too early to be certain that it will render Flash, Silverlight or JavaFX obsolete. In the meantime, Adobe AIR provides the best vehicle for developing HTML5/CSS3 applications without a twinge of worry about browser incompatibilities.

    Cheers,
    Laila
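    For the curious, here is a minimal sketch of the two features discussed above - the media tags Adobe disables in AIR 2, and WebKit's multi-column text reflow. The file names are placeholders, and the column properties use the vendor-prefixed WebKit syntax of the time:

        <video src="clip.mp4" controls></video>
        <audio src="clip.mp3" controls></audio>
        <div style="-webkit-column-count: 3; -webkit-column-gap: 2em;">
          Long article text reflows across three columns in WebKit-based
          browsers such as Safari and the AIR runtime.
        </div>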

    Read the article

  • Automate RAC Cluster Upgrades using EM12c

    - by HariSrinivasan
    One of the most arduous processes in DB maintenance is upgrading databases across major versions, especially for complex RAC clusters. With the release of the Database plug-in (12.1.0.5.0), EM12c Release 3 (12.1.0.3.0) now supports automated upgrading of RAC clusters in addition to standalone databases. This automation includes:

    - Upgrade of the complete cluster across the nodes (example: 11.1.0.7 CRS, ASM and RAC DB -> 11.2.0.4 or 12.1.0.1 GI and RAC DB).
    - Best practices in tune with your operations, where you can automate the upgrade in steps. Step 1: upgrade the Clusterware to Grid Infrastructure (allowing you to wait, test, and then move on to the DBs). Step 2: upgrade the RAC DBs either separately or as a group (mass upgrade of the RAC DBs in the cluster).
    - Standard prerequisite checks, such as the Cluster Verification Utility (CVU) and RAC checks.
    - Division of the upgrade process into non-downtime activities (like laying down the new Oracle Homes (OHs) and running checks) and downtime activities (like upgrading the Clusterware to GI and upgrading RAC), thereby lowering the downtime required.
    - The ability to configure backup and restore options as part of the upgrade process. You can choose to: (a) take a backup via this process (either a Guaranteed Restore Point (GRP) or RMAN); (b) set the procedure to pause just before the upgrade step, to allow you to take a custom backup; or (c) ignore backup completely, if external mechanisms are already in place.

    High-level steps:

    1. Select the "Upgrade Database" procedure from the Database Provisioning home page.
    2. Choose the target type for the upgrade and the destination version.
    3. Pick and choose the cluster; the procedure picks up the complete topology, since the Clusterware/GI isn't upgraded already.
    4. Select the Gold Image of the destination version for deploying both the GI and RAC OHs.
    5. Specify the new OH patch and credentials, choose the restore and backup options, and, if required, provide additional pre- and post-scripts.
    6. Set break points in the procedure execution to isolate the downtime activities.
    7. Submit and track the procedure's execution status.

    The animation below captures the steps in the wizard. For the step-by-step process, and to understand the support matrix, check this documentation link. Explore the functionality!

    In the next blog, I'll talk about automating rolling upgrades of databases in a physical standby Data Guard environment using a transient logical standby.

    Read the article

  • The Case of the Missing Date/Time Stamp: Reporting Services 2008 R2 Snapshots

    - by smisner
    This week I stumbled upon an undocumented "feature" in SQL Server 2008 R2 Reporting Services as I was preparing a demonstration on how to set up and use report snapshots. If you're familiar with the main changes in this latest release of Reporting Services, you probably already know that Report Manager got a facelift this time around. Although this facelift was generally a good thing, one of the casualties - in my opinion - is the loss of the snapshot label that served two purposes. First, it flagged the report as a snapshot. Second, it let you know when that snapshot was created.

    As part of my standard operating procedure when demonstrating report snapshots, I point out this label, so I was rather taken aback when I didn't see it in the demonstration I was preparing. It sort of upset my routine, and I'm rather partial to my routines. I thought perhaps I wasn't looking in the right place and changed Report Manager from Tile View to Details View, but no - that label was still missing. In the grand scheme of life, it's not an earth-shattering change, but you'll have to look at the Modified Date in Details View to know when the snapshot was run. Or hope that the report developer included a textbox to show the execution time in the report. (Hint: this is a good time to add this to your list of report development best practices, whether a report gets set up as a report snapshot or not!)

    A snapshot from the past

    In case you don't remember how a snapshot appeared in Report Manager back in the old days (of SQL Server 2008 and earlier), here's an image I snagged from my Reporting Services 2008 Step by Step manuscript.

    A snapshot in the present

    A report server running in SharePoint integrated mode had no such label; there you had to rely on the Report Modified date-time stamp to know the snapshot execution time. So I guess all platforms are now consistent. Here's a screenshot of Report Manager in the 2008 R2 version. One of these is a snapshot and the rest execute on demand. Can you tell which is the snapshot?

    Consider descriptions as an alternative

    So my report snapshot demonstration has one less step, and I'll need to edit the Denali version of the Step by Step book. Things are simpler this way, but I sure wish we had an easier way to identify the execution methods of the reports. Consider using the description field to alert users that the report is a snapshot. It might save you a few questions about why the data isn't up to date if users know that something changed in the source of the report. Notice that the full description doesn't display in Tile View, so keep it short and sweet, or instruct users to open Details View to see the entire description.
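    As a sketch of that best practice, a report textbox can surface the execution time with the built-in Globals collection (the label wording here is just an example):

        ="Data as of " & Globals!ExecutionTime

    For a report snapshot, ExecutionTime reflects when the snapshot was generated rather than when it was viewed - which is exactly the date-time stamp the old label used to show.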

    Read the article

  • Rails - How to use modal form to add object in one model, then reflect that change on main page?

    - by Jim
    I'm working on a Rails app and I've come across a situation where I'm unsure of the cleanest way to proceed. I posted a question on SO with code samples and such; it has received no answers, and the more I think about the problem, the more I think I might be approaching it the wrong way. (See the SO question at http://stackoverflow.com/questions/9521319/how-to-reference-form-when-rendering-partial-from-js-erb-file.) So, as a more generic architecture question: right now I have a form where a user can add a new recipe. The form also allows the user to select ingredients (it uses a collection_select containing Ingredient.all). The catch is, I'd like the user to be able to add a new ingredient on the fly without leaving the recipe form. Using a hidden div and some jQuery/AJAX, I have a link the user can click to pop up a modal form containing ingredients/new.html.erb, which is a simple form. When that form is submitted, I call ingredients/create.js.erb to validate that the ingredient was saved and hide the modal div. Now I am back on my recipe form, but my collection_select hasn't updated. It seems I have a few choices here:

    1. Try to re-render the collection_select portion of the form so it grabs a new list of ingredients. This was the method I was attempting when I wrote the SO question. The problem I run into is that the partial I use for the collection_select needs the parent form passed in, and when I try to render from the JS file I don't know how to pass it the form object.

    2. Reload the recipe form. This works (the collection_select now contains the new ingredient), but the user loses any progress they made on the recipe form. I would need a way to persist the form data. I thought about manually passing the values back and forth, but that is sloppy and there has to be a better way...

    3. Try to manually insert the tags using jQuery. This would be simple, but because I'm allowing multiple ingredients to be added, I can't be certain what ID to target.

    Now, I can't be the only person to have this issue, so is there an easier way I'm missing? I like option 2 above, but I don't know if there's an easy way to grab the entire params hash as if I had submitted the main recipes form. Hopefully someone can point me in the right direction... If this doesn't make any sense at all, let me know; I can post code samples if you want, but most of the pertinent code is up on the SO question. Thanks!
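    For what it's worth, option 3 is less fragile than it sounds if the select is targeted by a stable attribute instead of a generated ID. Below is a sketch of what ingredients/create.js.erb could do; the DOM selector, modal ID, and @ingredient are assumptions about the app's naming, not taken from the question:

        // app/views/ingredients/create.js.erb (sketch; names are hypothetical)
        $("#new-ingredient-modal").hide();
        // Append the new ingredient as an <option> so it can be picked
        // without reloading the recipe form and losing its state.
        $("select[id$='ingredient_ids']").append(
          $("<option>", { value: "<%= @ingredient.id %>",
                          text:  "<%= j @ingredient.name %>" })
        );

    If the form can contain several ingredient selects, scope the selector to the fieldset that opened the modal rather than matching every select on the page.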

    Read the article

  • Less Can Be More In E-Commerce

    - by Michael Hylton
    Today’s consumers are inundated with product choices and vendors. Visit your favorite electronics retailer and see the vast assortment of flat-panel televisions, or the variety of detergents at the supermarket. All of this can be daunting for the average consumer who is looking for the products and services that interest them. In a study titled “Choice is Demotivating: Can One Desire Too Much of a Good Thing?”, the author, Sheena Iyengar, found that participants actually reported greater subsequent satisfaction with their selections, and wrote better essays, when their original set of options had been limited. The same can be said for e-commerce and your website. Being able to quickly convert shoppers into buyers with effective merchandising is what makes leading businesses successful. You want to engage each individual visitor with the most relevant content to drive higher conversions and order values while decreasing abandonment, but predicting what will resonate with each customer is difficult. In a world of choices, online merchandising tools can help personalize, streamline, and refine what your customers view when they browse your online catalog. The key to being effective is to align your products and content as closely as possible with the customer’s needs.

    The goal on the home page is to promote your brand and push visitors farther into the site. The home page is often the starting point for repeat customers as well as for new visitors hoping to address their current product needs. As the customer selects different filters and narrows the choices, valuable information is being provided to the retailer about the customer’s current need, regardless of previous search behavior or what other customers with a similar demographic profile have purchased. Together with search pages, category browse pages are among the primary options available to customers as a means of finding products on your site. Once a customer reaches the product detail page, it is clear what that person desires, regardless of the segment the customer falls into. However, don’t disregard campaign-based promotions completely: a campaign targeted to all customers but featuring rule-driven promotions tied to the product can be effective. Click here to learn more about merchandising techniques, so that what your customer sees is half full and not half empty.

    Read the article

  • Tell me a Story

    - by Geoff N. Hiten
    I recently had a friend ask me to review his resume. He is a very experienced DBA with excellent skills; if I had an opening I would have hired him myself, but not because of the resume. I know his skill set and skill levels, but there is no way his standard resume can convey that. A bare-bones list of job titles and skills does not set you apart from your competition, nor does it convey whether you have junior- or senior-level skills and experience. The solution is to not use the standard format. Tell me a story. I want to know what you were responsible for. Describe a tough project and how you saved time/money/personnel on that project. Link your work activity to business value. Drop some technical bits in there, since we do work in a technical field, but show me what you can do to add value to my business well above what I would pay you. That will get my attention. The resume exists for one primary and one secondary reason. The primary reason is to get the interview; a resume won’t get you a job, so don’t expect it to. The secondary reason is to give you and the interviewer a starting point for conversations. If I can say “Tell me more about when…” and reference an item from your resume, then that is great for both of us. Of course, you had better be able to tell me more, both from the technical and the business side, at least if I am hiring for a senior or higher-level position. As for the junior DBAs, go ahead and tell your story too. Don’t worry about how simple or basic your projects or solutions seem. It is how you solved the problem and what you learned that I am looking for. If you learn rapidly and think like a DBA, I can work with that, regardless of your current skill level.

    Read the article

  • Pro SharePoint 2010 Business Intelligence Solutions

    - by Sahil Malik
    Oh yeah baby, it’s out finally! This book is what I wanted to write for so long but never really got a chance to. For SharePoint 2007, I authored the SharePoint section of “Smart BI Solutions with SQL Server 2008” for MS Press, but I never got the time to author the full book this topic deserved. With SharePoint 2010, we finally have a full book on the topic. So first things first: I didn’t actually write it. My role was limited to the overall concept, the outline, the layout, seeing it through to completion, code samples, identifying what we needed in it, vouching for technical accuracy, identifying authors, etc. The real work was done by Srini (5 chapters) and Steve (1 chapter), so credit given where it is due. That said, this is a pretty good book. It has always been a challenge to find the superman who knows both data warehousing concepts and SharePoint concepts. The data warehousing concepts include the basics you need to know to work in the BI area, such as cubes, MDX queries, etc. Chapter 1 covers that, so if you’re a hardcore DBA, feel free to skip it. Beyond that, we take every single SharePoint 2010 BI topic and slice and dice it in detail. The topics we cover are:

    - Visio Services
    - Reporting Services
    - Business Connectivity Services
    - Excel Services
    - PerformancePoint Services

    In covering each of these topics, we followed a general layout to ensure completeness of content. For every topic we make sure we cover:

    - setup-related issues and advice;
    - point-and-click usage;
    - code usage, i.e., extensibility using Visual Studio; and
    - a walkthrough of the administration side of things, including PowerShell (yes, I insisted on that being in every chapter).

    Writing a book is always a lot of work, so we hope you find it useful. It should go very well with the other book I just reviewed, Microsoft ADO.NET 4 Step by Step.

    Read the article

  • Need help with a complex 3d scene (using Ogre and bullet)

    - by Matthias
    In my setup there is a box with a hole on one side, and a freely movable "stick" (or bar, or tube). The stick can be inserted through the hole and moved into the box. The hole is exactly as wide as the diameter of the stick. In reality, if you held the end of the stick in your hand and moved your hand left/right or up/down, the other end of the stick, inside the box, would move in the opposite direction, because the stick pivots at the point where it enters the box through the hole. (I hope you understand what I mean so far.) Now I need to simulate such a setup in a 3D program. I have already developed an Ogre3D framework for this application, including Bullet. What I don't know is how to implement what I have described above. The application must include two more features:

    1. The scene camera is attached to the end of the stick that is inserted into the box, so when the user moves the mouse (to control "his" end of the stick outside the box), the camera attached to the stick moves in the opposite direction, as described above.

    2. The stick has some length, and the user can push it further into the box or pull it closer again. That means the maximum radius on which the end of the stick inside the box can move depends on how far the stick is pushed into the box: the further the stick is pushed in, the larger the maximum radius of the camera end becomes.

    I understand this is maybe quite a complex thing, so I don't expect any real source code here. As said, I already have the Ogre and Bullet parts up and running, as well as a camera attached to the stick, and this works fine. What I don't know is how to simulate the setup described above, especially the requirement that the stick is affixed at the position of the hole in the box where it is inserted. Any ideas on how I could approach implementing this?
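    Not part of the original question, but one way to model the pivot in Bullet is a single-body point-to-point constraint anchored at the hole. The sketch below assumes the stick already exists as a btRigidBody; the function and variable names (world, stickBody, holeWorldPos) are placeholders. Note that a pure point-to-point constraint only gives the see-saw pivot; letting the stick slide through the hole as it is pushed or pulled would mean re-anchoring the pivot (or using a slider-style constraint) as the insertion depth changes.

        #include <btBulletDynamicsCommon.h>

        // Sketch: pin the stick at the hole so moving one end swings the
        // other end the opposite way. Names here are assumptions.
        void pinStickAtHole(btDiscreteDynamicsWorld* world,
                            btRigidBody* stickBody,
                            const btVector3& holeWorldPos)
        {
            // Express the hole position in the stick's local frame.
            btVector3 pivotInStick =
                stickBody->getCenterOfMassTransform().inverse() * holeWorldPos;

            // Single-body point-to-point constraint: fixes that local point
            // to its current world position, creating the pivot at the hole.
            btPoint2PointConstraint* pivot =
                new btPoint2PointConstraint(*stickBody, pivotInStick);
            world->addConstraint(pivot);
        }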

    Read the article

  • WAIT-VHUB? What's Going On?

    - by Neeraj Gupta
    I know many of you have been working on Oracle's Exalogic and other engineered systems. With partitions enabled, things have gone multi-dimensional, but it's fun, isn't it? If you have some EoIB configurations together with InfiniBand partitions and the VNICs are not coming up, staying in the WAIT-VHUB state, chances are that you forgot to add the InfiniBand gateway switches' Bridge-X port GUIDs to your partition. These must be added as FULL members for EoIB to work properly. VHUB means a virtual hub in EoIB, and the Bridge-X is the access point for hosts working over EoIB, which is why it must be a full member of the partition.

    Step 1: Find the port GUIDs of your Bridge-X devices in the IB gateway switch:

        # showgwports

        INTERNAL PORTS:
        ---------------
        Device    Port  Portname    PeerPort  PortGUID            LID     IBState  GWState
        -----------------------------------------------------------------------------------
        Bridge-0   1    Bridge-0-1      4     0x0010e00c1b60c001  0x0002  Active   Up
        Bridge-0   2    Bridge-0-2      3     0x0010e00c1b60c002  0x0006  Active   Up
        Bridge-1   1    Bridge-1-1      2     0x0010e00c1b60c041  0x0026  Active   Up
        Bridge-1   2    Bridge-1-2      1     0x0010e00c1b60c042  0x002a  Active   Up

    Step 2: Add these port GUIDs to the IB partition associated with EoIB. Log in to the master SM switch for this task:

        # smpartition start
        # smpartition add -pkey <PKey> -port <port GUID> -m full
        # smpartition commit

    Enjoy!
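    One extra check worth doing after the commit (not in the original post, and command availability can vary with switch firmware): list the active partition table and confirm the Bridge-X port GUIDs now appear as full members.

        # smpartition list active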

    Read the article

  • Loading levels from .txt or .XML for XNA

    - by Dave Voyles
    I'm attempting to add multiple levels to my pong game. I'd like to simply exchange a few elements with each level, nothing crazy: just the background texture, the color of the AI paddle (the one on the right side), and the music. It seems the best way to go about this is to use a StreamReader to read and write the files from XML. If there is a better or more efficient alternative, I'm all for it. Looking over the XNA starter Platformer kit provided by MS, it seems they've done it in this manner as well. I'm perplexed by a few things, however, namely parts within the Level class which aren't commented:

        /// <summary>
        /// Iterates over every tile in the structure file and loads its
        /// appearance and behavior. This method also validates that the
        /// file is well-formed with a player start point, exit, etc.
        /// </summary>
        /// <param name="fileStream">
        /// A stream containing the tile data.
        /// </param>
        private void LoadTiles(Stream fileStream)
        {
            // Load the level and ensure all of the lines are the same length.
            int width;
            List<string> lines = new List<string>();
            using (StreamReader reader = new StreamReader(fileStream))
            {
                string line = reader.ReadLine();
                width = line.Length;
                while (line != null)
                {
                    lines.Add(line);
                    if (line.Length != width)
                        throw new Exception(String.Format("The length of line {0} is different from all preceding lines.", lines.Count));
                    line = reader.ReadLine();
                }
            }
            // (the method continues in the kit's Level class)

    What does width = line.Length mean exactly? I mean, I know how it reads the line, but what difference does it make if one line is longer than any of the others? Finally, their levels are simply text files that look like this:

        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        .........GGG........
        .........###........
        ....................
        ....GGG.......GGG...
        ....###.......###...
        ....................
        .1................X.
        ####################

    It can't be that easy..... Can it?

    Read the article

  • Minimizing Dependencies For GUIs

    - by tuba09
    I've been working on a project and have been charged with designing its GUI front-end. I'm coding in Java and using the Swing toolkit. Usability-wise, the GUI front-end follows all of Nielsen's heuristics: users can easily get to where they want to go through the click of a button / JComboBox. Essentially, in Swing terms, their actions drive the creation/deletion of custom panels. The GUI is coming along fine for the most part. However, I have to admit to being utterly dismayed at the tight web of dependencies my code is being smothered in. The main problem I've encountered, and haven't been able to fix yet, is how to keep a reference to the panels/buttons being changed. I'll give an example:

    - Say there's a button A.
    - Say there's a panel B displaying picture C.
    - Say there's another picture D (not currently displayed by panel B).
    - When the user clicks A, panel B should remove picture C and display picture D.

    My question is: what's the best way of keeping track of panel B? Since I need a global point of access to panel B, my solution so far has been to shoehorn it into a static variable and access it through a series of static getters and setters. This static variable is usually stored in the reference's original class; i.e., UserPanel has a static variable that stores a reference to itself. Is there an easy, tried-and-true way of dealing with these kinds of situations? My GUI works fine, but it is not modular or robust at all. To add to this, the dreaded 'cyclical dependencies' issue shunned by so many programmers is out here in full effect. I'm fairly new to development and just want to make sure that my code will be fairly extensible and won't cause much of a headache for the next person who tries their hand at it. I know there are loads of books out there that probably have a nice elegant solution to this, but unfortunately I just don't have time for leisure reading right now. I need something quick and dirty. Thanks in advance.
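    One common way out of the static-variable trap, sketched below in plain Swing (all class and field names here are made up for illustration, not taken from the project): let the code that builds the UI own the references and hand collaborators only what they need. Button A never reaches panel B through a static getter; its listener simply closes over a reference passed in at construction time.

        import java.awt.BorderLayout;
        import javax.swing.*;

        // Minimal sketch: the picture panel is passed to whoever needs to
        // change it, so no static getters/setters are required anywhere.
        public class SwapDemo {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(() -> {
                    // Stand-in for "panel B displaying picture C".
                    JLabel pictureB = new JLabel("Picture C", SwingConstants.CENTER);

                    // Stand-in for "button A"; the listener closes over pictureB.
                    JButton buttonA = new JButton("Show picture D");
                    buttonA.addActionListener(e -> pictureB.setText("Picture D"));

                    JFrame frame = new JFrame("Swap demo");
                    frame.setLayout(new BorderLayout());
                    frame.add(pictureB, BorderLayout.CENTER);
                    frame.add(buttonA, BorderLayout.SOUTH);
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.setSize(320, 200);
                    frame.setVisible(true);
                });
            }
        }

    The same idea scales up: a small mediator object can hold the handful of long-lived panels and expose intention-revealing methods (showPicture, removePicture), which also breaks the cyclical dependencies, since panels depend on the mediator rather than on each other.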

    Read the article

< Previous Page | 599 600 601 602 603 604 605 606 607 608 609 610  | Next Page >