Search Results

Search found 2929 results on 118 pages for 'avatar generation'.


  • 10 tape technology features that make you go hmm.

    - by Karoly Vegh
    A week ago an Oracle/StorageTek tape specialist, Christian Vanden Balck, visited Vienna and agreed to visit customers to give tech talks and update them on the technology boom going on around tape. I had the privilege to attend some of his sessions and noted the information and features that took the customers by surprise and made them think. Allow me to share the top 10:
    I. StorageTek as a brand: StorageTek is one of the strongest names in the tape field. The brand was valued so highly by customers that even after Sun Microsystems acquired StorageTek, and Oracle acquired Sun, the brand lives on: all Oracle tape libraries are officially branded StorageTek. See http://www.oracle.com/us/products/servers-storage/storage/tape-storage/overview/index.html
    II. Disk information density limitations: Disk technology struggles with information density. You haven't seen disk sizes exploding lately, have you? That's partly because there are physical limits on a disk platter. The size is given, the number of platters is limited, they just can't grow, and they are running out of physical area to write to. In a T10000C tape cartridge, by contrast, there is over 1000m of tape. There you go: you have your physical space and don't need to cram all that data together. You can write in a reliable pattern, and you have space to grow too.
    III. Oracle has a 62% worldwide market share in recording head manufacturing. That's right: if you are running LTO drives, there is a good chance you rely on StorageTek production. That's two out of three LTO recording heads produced worldwide.
    IV. You can store 1 Exabyte of data in a single tape library. Yes, an Exabyte. That is 1,000 Petabytes, or a million Terabytes, or a thousand million Gigabytes. You can store that in a stacked StorageTek SL8500 tape library. In one SL8500 you can put 10,000 T10000C cartridges, each storing 10TB of data (compressed). You can stack 10 of these SL8500s together. Boom: 1,000,000 TB. (N.B.: stacking means interconnecting the libraries. Yes, cartridges are moved between the stacked libraries automatically.)
    V. EMC: 'Tape doesn't suck after all. We moved on.': Do you remember the infamous 'Tape sucks, move on' Data Domain slogan? Of course they had to put it that way, having only disk products. But here's a fun fact: at EMC World 2012 there was a major presence of a tape-tech company. EMC, in a sudden burst of sanity, is embracing tape again.
    VI. The miraculous T10000C: Oracle StorageTek has developed an enterprise-grade tape drive and cartridge, the T10000C, with awesome numbers. The cartridge: native 5TB capacity, 10TB with compression; over a kilometer of tape within the cartridge, locked when unmounted, so no rattling of your data; the metal-particle data layer replaced with BaFe (barium ferrite), because metal particles lose around 7% of their magnetism within 30 days and BaFe does not (yes, we employ solid-state physicists doing R&D on demagnetisation in our labs); and it can be partitioned, giving you storage tiering within the cartridge. The drive: 2GB cache; encryption implemented in hardware, with no performance hit; 252 MB/s native sustained data rate, which beats disk technology by far, not to mention peak throughput; it leads the tape while never touching the data side of it, protecting your data physically too; data integrity checking (CRC recalculation) on tape within the drive, without having to read the data back to the server; it reorders data from tape order and delivers it back in application order; and it writes 32 tracks at once and reads them back at once for the CRC check.
    VII. You only use 20% of your data on a regular basis. The remaining 80% just lies around for years, on continuously spinning disks, doubly consuming energy (power plus cooling) and blocking disk storage capacity. There is a solution called SAM (Storage Archive Manager) that provides a filesystem unifying disk and tape, moving data on demand and transparently to clients between the different storage tiers. You can share these filesystems with NFS or CIFS for clients, and enjoy the low TCO of tape. Tapes don't spin. They sit quietly in their slots, storing 10TB of data, using no energy, producing no heat, automounted when a client accesses their data. See: http://www.oracle.com/us/products/servers-storage/storage/storage-software/storage-archive-manager/overview/index.html
    VIII. HW supported for decades: Did you know that the original PowderHorn library was released in '87 and was only discontinued in 2010? That is over two decades of supported operation. Tape libraries are, just like the data-carrying tape cartridges, built for longevity. Oh, and the T10000C cartridge has a 30-year archival life for long-term retention.
    IX. Tape is easy to manage: Have you heard of Tape Storage Analytics? It is a central graphical tool to summarize, monitor and analyze the dataflow, health and performance of drives and libraries. See: http://www.oracle.com/us/products/servers-storage/storage/tape-storage/tape-analytics/overview/index.html
    X. The next generation: The T10000B drives were able to reuse T10000A cartridges and write even more data on them. On the same cartridges. We call this investment protection, and it is very important to Oracle for the future too. We usually support two generations of cartridges together. The current drive is the T10000C.
    (...I know I promised to list 10, but I still have two more I really want to mention. Allow me to work around the problem:)
    X++. The TallBots, the robots that move cartridges around the StorageTek library from tape slots to drives, are cableless. Cables, belts and chains running to moving parts in a library cause maintenance downtime, so StorageTek eliminated them. The TallBots get power, commands, even firmware upgrades through the rails they run on. Also, the TallBots don't just hook'n'pull the tapes out of their slots, they actually grip'n'lift them out. No friction, no scratches, no zillion little plastic particles floating around in the library, in the drives, on your data.
    (X++)++: Tape beats SSDs and disks: in terms of throughput (252 MB/s), in terms of TCO (disks cause around 290x more power and cooling), and in terms of capacity (10TB on a single medium, and soon more).
    So... do you need to store large amounts of data? Are you legally bound to archive it for dozens of years? Would you benefit from automatic storage tiering? Have you got large media chunks to be streamed at times? Have you got power and cooling issues in your growing datacenters? Do you find EMC's 180° turn in tape attitude interesting, and appreciate it at the same time? If so, you aren't alone. Most of the data on this planet is stored on tape. Tape is coming. Big time.

    Read the article

  • The Future of Project Management is Social

    - by Natalia Rachelson
    A guest post by Kazim Isfahani, Director, Product Marketing, Oracle.
    Rapid Ascent. Breakneck Speed. Lightning Fast. Perhaps even overwhelming. No matter which set of adjectives we use to describe it, social media's rise into the enterprise mainstream has been unprecedented. Indeed, the big 4 social media powerhouses (Facebook, Google+, LinkedIn, and Twitter) have nearly 2 billion users between them. You may be asking (as you really should), "That's all well and good for the consumer, but for me at my company, what's your point? Beyond the fact that I can check and post updates, that is." Good question, kind sir.
    Impact of Social and Collaboration on Project Management
    I'll dovetail this discussion to the project management realm, since that's what I'm writing about. Speed is a big challenge for project-driven organizations. Anything that can help speed up project delivery, be it a new product introduction effort or a geographical expansion project, is a good thing. So where does this whole social thing fit, particularly since there is already a host of tools to help with traditional project execution? The fact is companies have seen improvements in their productivity by deploying departmental collaboration and other social-oriented solutions. McKinsey's survey on social tools shows we have reached critical scale: 72% of respondents report that their companies use at least one, and over 40% say they are using social networks and blogs. We don't hear as much about the impact of social media technologies at the project and project manager level, but that does not mean there is none. Consider the new hire. The type of individual entering the workforce and executing on projects belongs to a generation of workers expecting visually appealing, easy-to-use and easy-to-understand technology that meshes hand in hand with business processes. Consider the project manager. The social era has enhanced the role that the project manager must play. Today's project manager must be a supreme communicator, an influencer, a sympathizer, a negotiator, and still manage to keep all stakeholders in the loop on project progress. Social tools play a significant role in this effort. Now consider the impact on the project team. The way that a project team functions has changed, with newer, social-oriented technologies making the process of information dissemination and team communications much more fluid. It's clear that a shift is occurring where "social" is intersecting with project management.
    The Rise of Social Project Management
    We refer to the melding of project management and social networking as Social Project Management.
    Social Project Management is based upon the philosophy that the project team is one part of an integrated whole, and that valuable and unique abilities exist within the larger organization. For this reason, Social Project Management systems should be integrated into the collaborative platform(s) of an organization, allowing communication to proceed outside the project boundaries. What makes social project management "social" is an implicit awareness in which distributed teams build connected links in ways that were previously restricted to co-located teams. Just as critical, Social Project Management embraces the vision of seamless online collaboration within a project team, but also provides for (and enhances) the use of rigorous project management techniques. Social Project Management acknowledges that projects (particularly large projects) are a social activity: people doing work with people, for other people, with commitments to yet other people. The more people (the larger the project), the more interpersonal the interactions, and the more the social dimension affects the project.
    The Epitome of Social - Fusion Project Portfolio Management
    If I take this one level further to discuss Fusion Project Portfolio Management, the notion of Social Project Management is on full display. With Fusion Project Portfolio Management, project team members have a single place for interaction on projects and access to any other resources working within the Fusion ERP applications. This allows team members to be better informed, participate more fully, and provide better information. The application's visual appeal and highly graphical nature make it easy to navigate information. The project activity stream adds to the intuitive user experience. The goal of productivity is pervasive throughout Fusion Project Portfolio Management. Field research conducted with Oracle customers and partners showed that users needed a way to stay in the context of their core transactions and yet easily access social networking tools. This is manifested in the application so that when a user executes a business process, they not only have the transactional application at their fingertips, but also have things like e-mail, SMS, text, instant messaging, and chat, all providing a number of different ways to interact with people and/or groups of people, both internal and external to the project and enterprise. But in the end, connecting people is relatively easy. The larger issue is finding a way to serve up relevant, system-generated, actionable information, in real time, which will allow for more streamlined execution of key business processes. Fusion Project Portfolio Management's design concept enables users to create project communities, establish discussion threads, and manage event calendars, as well as deliver project-based workspaces to organize communications within the context of a project, all within a secure business environment. We'd love to hear from you and get your thoughts and ideas about how Social Project Management is impacting your organization. To learn more about Oracle Fusion Project Portfolio Management, please visit this link

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (eg, uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion but the data rates are significantly different as well as the tipping points for the various block sizes… this will be detailed separately). We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files for the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially and thereby spreading the affects of periodic Internet delays across the collection of results.  We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results. 
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as is supported by the raw data provided in the linked worksheet) the charts and discussion below ignore source file sizes less than 1MB. (click chart for full size image) The chart above illustrates some interesting points about the results:
    When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    For some of the moderately-sized source files, small blocks (256KB) are best.
    As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs).
    Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.
    (click chart for full size image) The above is another view of the same data as the prior chart, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but highlights the benefits of some of the other block sizes at different source file sizes.
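    To make the blocking-and-parallelizing step above concrete, here is a minimal, hypothetical sketch (not the author's linked code) of a parallel block upload built on the PutBlock()/PutBlockList() methods of the 1.x storage client library and a Parallel.For loop. The helper name, the 1MB default block size, and the block-ID scheme are illustrative assumptions.

    // Hypothetical sketch of a blocked/parallel upload, assuming the 1.x
    // Microsoft.WindowsAzure.StorageClient library (PutBlock/PutBlockList).
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Security.Cryptography;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.StorageClient;

    static class BlockUploader
    {
        public static void UploadFileInBlocks(CloudBlockBlob blob, string path, int blockSize = 1024 * 1024)
        {
            long fileLength = new FileInfo(path).Length;
            int blockCount = (int)Math.Ceiling((double)fileLength / blockSize);

            // Block IDs must be base64-encoded strings of equal length.
            List<string> blockIds = Enumerable.Range(0, blockCount)
                .Select(i => Convert.ToBase64String(BitConverter.GetBytes(i)))
                .ToList();

            // Upload the blocks in parallel; each iteration reads its own slice of the file.
            Parallel.For(0, blockCount, i =>
            {
                using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
                {
                    fs.Seek((long)i * blockSize, SeekOrigin.Begin);
                    int size = (int)Math.Min(blockSize, fileLength - (long)i * blockSize);
                    byte[] buffer = new byte[size];
                    int read = 0;
                    while (read < size)
                        read += fs.Read(buffer, read, size - read);

                    // Send the MD5 of each block so the service can verify the bits that arrived.
                    string md5;
                    using (var hasher = MD5.Create())
                        md5 = Convert.ToBase64String(hasher.ComputeHash(buffer));

                    blob.PutBlock(blockIds[i], new MemoryStream(buffer), md5);
                }
            });

            // Commit the uploaded blocks as a single blob.
            blob.PutBlockList(blockIds);
        }
    }

    In a real harness the call to UploadFileInBlocks would be wrapped in the same duration/size timer described earlier, so that blocked and non-blocked runs are measured identically.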
    This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files.
    Summary
    What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions (see the sketch after the resource list below).
    Related Resources
    Source code for upload test application
    Source code for random file generator
    OData feed of raw data from non-optimized transfer tests
    Experiment Metadata
    Experiment Datasets
    2KB Uploads
    32KB Uploads
    64KB Uploads
    128KB Uploads
    256KB Uploads
    512KB Uploads
    1MB Uploads
    5MB Uploads
    10MB Uploads
    25MB Uploads
    50MB Uploads
    100MB Uploads
    250MB Uploads
    500MB Uploads
    750MB Uploads
    1GB Uploads
    Raw Data
    OData feeds of raw data from blocked/parallelized transfer tests
    Experiment Metadata
    Experiment Datasets
    Raw Data
    256KB Blocks
    512KB Blocks
    1MB Blocks
    2MB Blocks
    4MB Blocks
    Excel worksheet showing summarizations and comparisons
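    As a rough illustration of the extension-method packaging mentioned above (hypothetical, reusing the UploadFileInBlocks sketch shown earlier), existing code that already holds a CloudBlockBlob would barely need to change:

    // Hypothetical extension method wrapping the earlier parallel-upload sketch,
    // so existing callers only swap one method call.
    using Microsoft.WindowsAzure.StorageClient;

    static class CloudBlockBlobExtensions
    {
        // usage: blob.ParallelUploadFile(@"C:\data\file.bin", 1024 * 1024);
        public static void ParallelUploadFile(this CloudBlockBlob blob, string path,
                                              int blockSize = 1024 * 1024)
        {
            BlockUploader.UploadFileInBlocks(blob, path, blockSize);
        }
    }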

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to:
    · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data)
    · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog)
    The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include:
    · Global Transaction Identifiers coupled with MySQL utilities for automatic failover / switchover and slave promotion
    · Crash Safe Slaves and Binlog
    · Optimized Row Based Replication
    · Replication Event Checksums
    · Time Delayed Replication
    These and many more are discussed in the "MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services" Developer Zone article. Back to the benchmark - details are as follows.
    Environment
    The test environment consisted of two Linux servers: one running the replication master and one running the replication slave. Only the slave was involved in the actual measurements, and it was based on the following configuration:
    - Hardware: Oracle Sun Fire X4170 M2 Server
    - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz
    - OS: 64-bit Oracle Enterprise Linux 6.1
    - Memory: 48 GB
    Test Procedure
    Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:
    CREATE TABLE `sbtest` (
       `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
       `k` int(10) unsigned NOT NULL DEFAULT '0',
       `c` char(120) NOT NULL DEFAULT '',
       `pad` char(60) NOT NULL DEFAULT '',
       PRIMARY KEY (`id`),
       KEY `k` (`k`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1
    10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master.
Each sysbench client executed 10,000 "update key" statements: UPDATE sbtest set k=k+1 WHERE id = <random row> In total, this generated 100,000 update statements to later replicate during the test itself. Test Methodology: The number of slave workers to test with was configured using: SET GLOBAL slave_parallel_workers=<workers> Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS). Test Reset: The test-reset cycle was implemented as follows: · the slave was stopped · the slave data directory replaced with the previous backup · the slave restarted with the slave threads replication pointer repositioned to the point before the update queries in the binlog. The test could then be repeated with identical set of queries but a different number of slave worker threads, enabling a fair comparison. The Test-Reset cycle was repeated 3 times for 0-24 number of workers and the QPS metric calculated and averaged for each worker count. MySQL Configuration The relevant configuration settings used for MySQL are as follows: binlog-format=STATEMENT relay-log-info-repository=TABLE master-info-repository=TABLE As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is: 0 worker threads:    - current (i.e. single threaded) sequential mode    - 1 x IO thread and 1 x SQL thread    - SQL thread both reads and executes the events 1 worker thread:    - sequential mode    - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread    - coordinator reads the event and hands it to the worker who executes 2+ worker threads:    - parallel execution    - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads    - coordinator reads events and hands them to the workers who execute them Results Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 x schemas. This result is compared to the current replication implementation which is based on a single SQL thread only (i.e. zero worker threads). Figure 1: 5x Higher Performance with Multi-Threaded Slaves The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post. Figure 2: Detailed Results As the results above show, the configuration does not scale noticably from 5 to 9 worker threads. When configured with 10 worker threads however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results: · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution. · As expected, having more workers than schemas adds no visible benefit. 
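    To make the Test Methodology above concrete, here is a minimal, hypothetical C# driver for one measurement cycle, written against MySQL Connector/NET (MySql.Data). The connection string, the binlog file/position parameters and the RunBenchmarkIteration name are illustrative assumptions, and the relay-log wait and data-directory reset steps are only indicated by comments.

    // Hypothetical sketch of one measure cycle from the Test Methodology above,
    // assuming MySQL Connector/NET (MySql.Data).
    using System;
    using System.Diagnostics;
    using MySql.Data.MySqlClient;

    static class MtsBenchmark
    {
        static object Exec(MySqlConnection conn, string sql)
        {
            // CommandTimeout = 0 so the blocking MASTER_POS_WAIT call is not cut short.
            using (var cmd = new MySqlCommand(sql, conn) { CommandTimeout = 0 })
                return cmd.ExecuteScalar();
        }

        // Returns queries-per-second for the 100,000 pre-recorded update statements.
        public static double RunBenchmarkIteration(string connectionString, int workers,
                                                   string binlogFile, long binlogPos)
        {
            using (var conn = new MySqlConnection(connectionString))
            {
                conn.Open();

                Exec(conn, "SET GLOBAL slave_parallel_workers = " + workers);
                Exec(conn, "START SLAVE IO_THREAD");
                // ...wait here until all 100k updates have been copied to the relay log...

                var clock = Stopwatch.StartNew();
                Exec(conn, "START SLAVE SQL_THREAD");

                // Blocks until the SQL thread has executed up to the recorded master position.
                Exec(conn, string.Format(
                    "SELECT MASTER_POS_WAIT('{0}', {1})", binlogFile, binlogPos));
                clock.Stop();

                return 100000.0 / clock.Elapsed.TotalSeconds;
            }
        }
    }

    The real test repeated this cycle three times per worker count (restoring the slave data directory between runs, as described in Test Reset) and averaged the resulting QPS values.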
Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance: relay-log-info-repository=TABLE master-info-repository=TABLE For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE. Conclusion As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6 are fully available for you to download and evaluate now from the MySQL Developer site (select Development Release tab). You can learn more about MySQL 5.6 from the documentation  Please don’t hesitate to comment on this or other replication blogs with feedback and questions. Appendix – Detailed Results

    Read the article

  • CodePlex Daily Summary for Monday, August 11, 2014

    CodePlex Daily Summary for Monday, August 11, 2014Popular ReleasesSpace Engineers Server Manager: SESM V1.15: V1.15 - Updated Quartz library - Correct a bug in the new mod managment - Added a warning if you have backup enabled on a server but no static map configuredAspose for Apache POI: Missing Features of Apache POI SS - v 1.2: Release contain the Missing Features in Apache POI SS SDK in comparison with Aspose.Cells What's New ? Following Examples: Create Pivot Charts Detect Merged Cells Sort Data Printing Workbooks Feedback and Suggestions Many more examples are available at Aspose Docs. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.AngularGo (SPA Project Template): AngularGo.VS2013.vsix: First ReleaseTouchmote: Touchmote 1.0 beta 13: Changes Less GPU usage Works together with other Xbox 360 controls Bug fixesPublic Key Infrastructure PowerShell module: PowerShell PKI Module v3.0: Important: I would like to hear more about what you are thinking about the project? I appreciate that you like it (2000 downloads over past 6 months), but may be you have to say something? What do you dislike in the module? Maybe you would love to see some new functionality? Tell, what you think! Installation guide:Use default installation path to install this module for current user only. To install this module for all users — enable "Install for all users" check-box in installation UI ...Modern UI for WPF: Modern UI 1.0.6: The ModernUI assembly including a demo app demonstrating the various features of Modern UI for WPF. BREAKING CHANGE LinkGroup.GroupName renamed to GroupKey NEW FEATURES Improved rendering on high DPI screens, including support for per-monitor DPI awareness available in Windows 8.1 (see also Per-monitor DPI awareness) New ModernProgressRing control with 8 builtin styles New LinkCommands.NavigateLink routed command New Visual Studio project templates 'Modern UI WPF App' and 'Modern UI W...ClosedXML - The easy way to OpenXML: ClosedXML 0.74.0: Multiple thread safe improvements including AdjustToContents XLHelper XLColor_Static IntergerExtensions.ToStringLookup Exception now thrown when saving a workbook with no sheets, instead of creating a corrupt workbook Fix for hyperlinks with non-ASCII Characters Added basic workbook protection Fix for error thrown, when a spreadsheet contained comments and images Fix to Trim function Fix Invalid operation Exception thrown when the formula functions MAX, MIN, and AVG referenc...SEToolbox: SEToolbox 01.042.019 Release 1: Added RadioAntenna broadcast name to ship name detail. Added two additional columns for Asteroid material generation for Asteroid Fields. Added Mass and Block number columns to main display. Added Ellipsis to some columns on main display to reduce name confusion. Added correct SE version number in file when saving. Re-added in reattaching Motor when drag/dropping or importing ships (KeenSH have added RotorEntityId back in after removing it months ago). Added option to export and r...jQuery List DragSort: jQuery List DragSort 0.5.2: Fixed scrollContainer removing deprecated use of $.browser so should now work with latest version of jQuery. 
Added the ability to return false in dragEnd to revert sort order Project changes Added nuget package for dragsort https://www.nuget.org/packages/dragsort Converted repository from SVN to MercurialBraintree Client Library: Braintree 2.32.0: Allow credit card verification options to be passed outside of the nonce for PaymentMethod.create Allow billingaddress parameters and billingaddress_id to be passed outside of the nonce for PaymentMethod.create Add Subscriptions to paypal accounts Add PaymentMethod.update Add failonduplicatepaymentmethod option to PaymentMethod.create Add support for dispute webhooksThe Mario Kart 8 App: V1.0.2.1: First Codeplex release. WINDOWS INSTALLER ONLYAspose Java for Docx4j: Aspose.Words vs Docx4j - v 1.0: Release contain the Code Comparison for Features in Docx4j SDK and Aspose.Words What's New ?Following Examples: Accessing Document Properties Add Bookmarks Convert to Formats Delete Bookmarks Working with Comments Feedback and Suggestions Many more examples are available at Aspose Docs. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.File System Security PowerShell Module: NTFSSecurity 2.4.1: Add-Access and Remove-Access now take multiple accoutsYourSqlDba: YourSqlDba 5.2.1.: This version improves alert message that comes a while after you install the script. First it says to get it from YourSqlDba.CodePlex.com If you don't want to update now, just-rerun the script from your installed version. To get actual version running just execute install.PrintVersionInfo. . You can go to source code / history and click on change set 72957 to see changes in the script.Manipulator: Manipulator: manipulatorXNB filetype plugin for Paint.NET: Paint.NET XNB plugin v0.4.0.0: CHANGELOG Reverted old incomplete changes. Updated library for compatibility with Paint .NET 4. Updated project to NET 4.5. Updated version to 0.4.0.0. INSTALLATION INSTRUCTIONS Extract the ZIP file to your Paint.NET\FileTypes folder.EdiFabric: Release 4.1: Changed MessageContextWix# (WixSharp) - managed interface for WiX: Release 1.0.0.0: Release 1.0.0.0 Custom UI Custom MSI Dialog Custom CLR Dialog External UIMath.NET Numerics: Math.NET Numerics v3.2.0: Linear Algebra: Vector.Map2 (map2 in F#), storage-optimized Linear Algebra: fix RemoveColumn/Row early index bound check (was not strict enough) Statistics: Entropy ~Jeff Mastry Interpolation: use Array.BinarySearch instead of local implementation ~Candy Chiu Resources: fix a corrupted exception message string Portable Build: support .Net 4.0 as well by using profile 328 instead of 344. 
    .Net 3.5: F# extensions now support .Net 3.5 as well .Net 3.5: NuGet package now contains pro...babelua: 1.6.5.1: V1.6.5.1 - 2014.8.7New feature: Formatting code; Stability improvement: fix a bug that pop up error "System.Net.WebResponse EndGetResponse";New ProjectsDouDou: a little project.Dynamic MVC: Dynamically generate views from your model objects for a data centric MVC application.EasyDb - Simple Data Access: EasyDb is a simple library for data access that allows you to write less code.ExpressToAbroad: just go!!!!!Full Silverlight Web Video/Voice Conferencing: The Goal of this project is to provide complete Open Source (Voice/Video Chatting Client/Server) Modules Using SilverlightGaia: Gaia is an app for Windows plataform, Gaia is like Siri and Google Now or Betty but Gaia use only text commands.pxctest: pxctestSTACS: Career Management System for MIT by Team "STACS"StrongWorld: StrongWorld.WebSuiteXevas Tools: Xevas is a professional coders group of 'Nimbuzz'. We make all tools for worldwide users of nimbuzz at free of cost.

    Read the article

  • We've completed the first iteration

    - by CliveT
    There are a lot of features in C# that are implemented by the compiler and not by the underlying platform. One such feature is a lambda expression. Since local variables cannot be accessed once the current method activation finishes, the compiler has to go out of its way to generate a new class which acts as a home for any variable whose lifetime needs to be extended past the activation of the procedure. Take the following example:
        Random generator = new Random();
        Func<int> func = () => generator.Next(10);
    In this case, the compiler generates a new class called <>c__DisplayClass1 which is marked with the CompilerGenerated attribute.
    [CompilerGenerated]
    private sealed class <>c__DisplayClass1
    {
        // Fields
        public Random generator;

        // Methods
        public int b__0()
        {
            return this.generator.Next(10);
        }
    }
    Two quick comments on this:
    (i) A display was the means by which compilers for languages like Algol recorded the various lexical contours of the nested procedure activations on the stack. I imagine that this is what has led to the name.
    (ii) It is a shame that the same attribute is used to mark all compiler generated classes as it makes it hard to figure out what they are being used for. Indeed, you could imagine optimisations that the runtime could perform if it knew that classes corresponded to certain high level concepts.
    We can see that the local variable generator has been turned into a field in the class, and the body of the lambda expression has been turned into a method of the new class. The code that builds the Func object simply constructs an instance of this class and initialises the fields to their initial values.
        <>c__DisplayClass1 class2 = new <>c__DisplayClass1();
        class2.generator = new Random();
        Func<int> func = new Func<int>(class2.b__0);
    Reflector already contains code to spot this pattern of code and reproduce the form containing the lambda expression, so this example is correctly decompiled. The use of compiler generated code is even more spectacular in the case of iterators. C# introduced the idea of a method that could automatically store its state between calls, so that it can pick up where it left off. The code can express the logical flow with yield return and yield break denoting places where the method should return a particular value and be prepared to resume.
        {
            yield return 1;
            yield return 2;
            yield return 3;
        }
    Of course, there was already a .NET pattern for expressing the idea of returning a sequence of values with the computation proceeding lazily (in the sense that the work for the next value is executed on demand). This is expressed by the IEnumerable/IEnumerator pattern, with the enumerator's Current property for fetching the current value and its MoveNext method for forcing the computation of the next value. The sequence is terminated when this method returns false. The C# compiler links these two ideas together so that an IEnumerable- or IEnumerator-returning method using the yield keyword causes the compiler to produce the implementation of an iterator. Take the following piece of code.
        IEnumerable<int> GetItems()
        {
            yield return 1;
            yield return 2;
            yield return 3;
        }
    The compiler implements this by defining a new class that implements a state machine. This has an integer state that records which yield point we should go to if we are resumed. It also has a field that records the Current value of the enumerator and a field for recording the thread.
    This latter value is used for optimising the creation of iterator instances.
    [CompilerGenerated]
    private sealed class <GetItems>d__0 : IEnumerable<int>, IEnumerable, IEnumerator<int>, IEnumerator, IDisposable
    {
        // Fields
        private int <>1__state;
        private int <>2__current;
        public Program <>4__this;
        private int <>l__initialThreadId;
    The body gets converted into the code to construct and initialize this new class.
    private IEnumerable<int> GetItems()
    {
        <GetItems>d__0 d__ = new <GetItems>d__0(-2);
        d__.<>4__this = this;
        return d__;
    }
    When the class is constructed we set the state, which was passed through as -2, and the current thread.
    public <GetItems>d__0(int <>1__state)
    {
        this.<>1__state = <>1__state;
        this.<>l__initialThreadId = Thread.CurrentThread.ManagedThreadId;
    }
    The state needs to be set to 0 to represent a valid enumerator, and this is done in the GetEnumerator method, which optimises for the usual case where the returned enumerator is only used once.
    IEnumerator<int> IEnumerable<int>.GetEnumerator()
    {
        if ((Thread.CurrentThread.ManagedThreadId == this.<>l__initialThreadId)
                  && (this.<>1__state == -2))
        {
            this.<>1__state = 0;
            return this;
        }
    The state machine itself is implemented inside the MoveNext method.
    private bool MoveNext()
    {
        switch (this.<>1__state)
        {
            case 0:
                this.<>1__state = -1;
                this.<>2__current = 1;
                this.<>1__state = 1;
                return true;
            case 1:
                this.<>1__state = -1;
                this.<>2__current = 2;
                this.<>1__state = 2;
                return true;
            case 2:
                this.<>1__state = -1;
                this.<>2__current = 3;
                this.<>1__state = 3;
                return true;
            case 3:
                this.<>1__state = -1;
                break;
        }
        return false;
    }
    At each stage, the current value of the state is used to determine how far we got, and then we generate the next value, which we return after recording the next state. Finally we return false from MoveNext to signify the end of the sequence. Of course, that example was really simple. The original method body didn't have any local variables. Any local variables need to live between the calls to MoveNext, and so they need to be transformed into fields in much the same way that we did in the case of the lambda expression (see the small example below). More complicated MoveNext methods are required to deal with resources that need to be disposed when the iterator finishes, and sometimes the compiler uses a temporary variable to hold the return value. Why all of this explanation? We've implemented the de-compilation of iterators in the current EAP version of Reflector (7). This contrasts with previous versions, where all you could do was look at the MoveNext method and try to figure out the control flow. There's a fair amount of work we have to do. We have to spot the use of a CompilerGenerated class which implements the Enumerator pattern. We need to go to the class and figure out the fields corresponding to the local variables. We then need to go to the MoveNext method and try to break it into the various possible states and spot the state transitions. We can then take these pieces and put them back together into an object model that uses yield return to show the transition points. After that Reflector can carry on optimising using its usual optimisations. The pattern matching is currently a little too sensitive to changes in the code generation, and we only do a limited analysis of the MoveNext method to determine use of the compiler generated fields.
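    As a small, hypothetical illustration of that point about local variables (not an example from the post): a local that has to survive between calls to MoveNext, such as the running total below, is lifted by the compiler into a field of the generated state-machine class, alongside the state and current-value fields.

    // Hypothetical example: 'runningTotal' must keep its value between calls to
    // MoveNext, so the compiler lifts it into a field of the generated class.
    using System.Collections.Generic;

    class Accumulator
    {
        public IEnumerable<int> RunningTotals(IEnumerable<int> items)
        {
            int runningTotal = 0;            // becomes a field in the state machine
            foreach (int item in items)
            {
                runningTotal += item;
                yield return runningTotal;   // each yield marks a resumption state in MoveNext
            }
        }
    }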
In some ways, it is a pity that iterators are compiled away and there is no metadata that reflects the original intent. Without it, we are always going to dependent on our knowledge of the compiler's implementation. For example, we have noticed that the Async CTP changes the way that iterators are code generated, so we'll have to do some more work to support that. However, with that warning in place, we seem to do a reasonable job of decompiling the iterators that are built into the framework. Hopefully, the EAP will give us a chance to find examples where we don't spot the pattern correctly or regenerate the wrong code, and we can improve things. Please give it a go, and report any problems.

    Read the article

  • Augmenting your Social Efforts via Data as a Service (DaaS)

    - by Mike Stiles
    The following is the 3rd in a series of posts on the value of leveraging social data across your enterprise by Oracle VP Product Development Don Springer and Oracle Cloud Data and Insight Service Sr. Director Product Management Niraj Deo. In this post, we will discuss the approach and value of integrating additional “public” data via a cloud-based Data-as-as-Service platform (or DaaS) to augment your Socially Enabled Big Data Analytics and CX Management. Let’s assume you have a functional Social-CRM platform in place. You are now successfully and continuously listening and learning from your customers and key constituents in Social Media, you are identifying relevant posts and following up with direct engagement where warranted (both 1:1, 1:community, 1:all), and you are starting to integrate signals for communication into your appropriate Customer Experience (CX) Management systems as well as insights for analysis in your business intelligence application. What is the next step? Augmenting Social Data with other Public Data for More Advanced Analytics When we say advanced analytics, we are talking about understanding causality and correlation from a wide variety, volume and velocity of data to Key Performance Indicators (KPI) to achieve and optimize business value. And in some cases, to predict future performance to make appropriate course corrections and change the outcome to your advantage while you can. The data to acquire, process and analyze this is very nuanced: It can vary across structured, semi-structured, and unstructured data It can span across content, profile, and communities of profiles data It is increasingly public, curated and user generated The key is not just getting the data, but making it value-added data and using it to help discover the insights to connect to and improve your KPIs. As we spend time working with our larger customers on advanced analytics, we have seen a need arise for more business applications to have the ability to ingest and use “quality” curated, social, transactional reference data and corresponding insights. The challenge for the enterprise has been getting this data inline into an easily accessible system and providing the contextual integration of the underlying data enriched with insights to be exported into the enterprise’s business applications. The following diagram shows the requirements for this next generation data and insights service or (DaaS): Some quick points on these requirements: Public Data, which in this context is about Common Business Entities, such as - Customers, Suppliers, Partners, Competitors (all are organizations) Contacts, Consumers, Employees (all are people) Products, Brands This data can be broadly categorized incrementally as - Base Utility data (address, industry classification) Public Master Reference data (trade style, hierarchy) Social/Web data (News, Feeds, Graph) Transactional Data generated by enterprise process, workflows etc. This Data has traits of high-volume, variety, velocity etc., and the technology needed to efficiently integrate this data for your needs includes - Change management of Public Reference Data across all categories Applied Big Data to extract statics as well as real-time insights Knowledge Diagnostics and Data Mining As you consider how to deploy this solution, many of our customers will be using an online “cloud” service that provides quality data and insights uniformly to all their necessary applications. 
In addition, they are requesting a service that is: Agile and Easy to Use: Applications integrated with the service can obtain data on-demand, quickly and simply Cost-effective: Pre-integrated into applications so customers don’t have to Has High Data Quality: Single point access to reference data for data quality and linkages to transactional, curated and social data Supports Data Governance: Becomes more manageable and cost-effective since control of data privacy and compliance can be enforced in a centralized place Data-as-a-Service (DaaS) Just as the cloud has transformed and now offers a better path for how an enterprise manages its IT from their infrastructure, platform, and software (IaaS, PaaS, and SaaS), the next step is data (DaaS). Over the last 3 years, we have seen the market begin to offer a cloud-based data service and gain initial traction. On one side of the DaaS continuum, we see an “appliance” type of service that provides a single, reliable source of accurate business data plus social information about accounts, leads, contacts, etc. On the other side of the continuum we see more of an online market “exchange” approach where ISVs and Data Publishers can publish and sell premium datasets within the exchange, with the exchange providing a rich set of web interfaces to improve the ease of data integration. Why the difference? It depends on the provider’s philosophy on how fast the rate of commoditization of certain data types will occur. How do you decide the best approach? Our perspective, as shown in the diagram below, is that the enterprise should develop an elastic schema to support multi-domain applicability. This allows the enterprise to take the most flexible approach to harness the speed and breadth of public data to achieve value. The key tenet of the proposed approach is that an enterprise carefully federates common utility, master reference data end points, mobility considerations and content processing, so that they are pervasively available. One way you may already be familiar with this approach is in how you do Address Verification treatments for accounts, contacts etc. If you design and revise this service in such a way that it is also easily available to social analytic needs, you could extend this to launch geo-location based social use cases (marketing, sales etc.). Our fundamental belief is that value-added data achieved through enrichment with specialized algorithms, as well as applying business “know-how” to weight-factor KPIs based on innovative combinations across an ever-increasing variety, volume and velocity of data, will be where real value is achieved. Essentially, Data-as-a-Service becomes a single entry point for the ever-increasing richness and volume of public data, with enrichment and combined capabilities to extract and integrate the right data from the right sources with the right factoring at the right time for faster decision-making and action within your core business applications. As more data becomes available (and in many cases commoditized), this value-added data processing approach will provide you with ongoing competitive advantage. Let’s look at a quick example of creating a master reference relationship that could be used as an input for a variety of your already existing business applications. In phase 1, a simple master relationship is achieved between a company (e.g. General Motors) and a variety of car brands’ social insights. 
The reference data allows for easy sort, export and integration into a set of CRM use cases for analytics, sales and marketing CRM. In phase 2, as you create more data relationships (e.g. competitors, contacts, other brands) to have broader and deeper references (social profiles, social meta-data) for more use cases across CRM, HCM, SRM, etc. This is just the tip of the iceberg, as the amount of master reference relationships is constrained only by your imagination and the availability of quality curated data you have to work with. DaaS is just now emerging onto the marketplace as the next step in cloud transformation. For some of you, this may be the first you have heard about it. Let us know if you have questions, or perspectives. In the meantime, we will continue to share insights as we can.Photo: Erik Araujo, stock.xchng

    Read the article

  • Feedback on meeting of the Linux User Group of Mauritius

    Once upon a time in a country far far away... Okay, actually it's not that bad but it has been a while since the last meeting of the Linux User Group of Mauritius (LUGM). There have been plans in the past but it never really happened. Finally, Selven took the opportunity and organised a new meetup with low administrative overhead, proper scheduling on alternative dates and a small attendee's survey on the preferred option. All the pre-work was nicely executed. First, I wasn't sure whether it would be possible to attend. Luckily I got some additional information, like children should come, too, and I was sold to this community gathering. According to other long-term members of the LUGM it was the first time 'ever' that a gathering was organised outside of Quatre Bornes, and I have to admit it was great! LUGM - user group meeting on the 15.06.2013 in L'Escalier Quick overview of Linux & the LUGM With a little bit of delay the LUGM meeting officially started with a quick overview and introduction to Linux presented by Avinash. During the session he told the audience that there had been quite some activity over the island some years ago but unfortunately it had been quiet during recent times. Of course, we also spoke about the acknowledged world dominance of Linux - thanks to Android - and the interesting possibilities for countries like Mauritius. It is known that a couple of public institutions have there back-end infrastructure running on Red Hat Linux systems but the presence on the desktop is still very low. Users are simply hanging on to Windows XP and older versions of Microsoft Office. Following the introduction of the LUGM Ajay joined into the session and it quickly changed into a panel discussion with lots of interesting questions and answers, sharing of first-hand experience either on the job or in private use of Linux, and a couple of ideas about how the LUGM could promote Linux a bit more in Mauritius. It was great to get an insight into other attendee's opinion and activities. Especially taking into consideration that I'm already using Linux since around 1996/97. Frankly speaking, I bought a SuSE 4.x distribution back in those days because I couldn't achieve certain tasks on Windows NT 4.0 without spending a fortune. OpenELEC Mediacenter Next, Selven gave us decent introduction on OpenELEC: Open Embedded Linux Entertainment Center (OpenELEC) is a small Linux distribution built from scratch as a platform to turn your computer into an XBMC media center. OpenELEC is designed to make your system boot fast, and the install is so easy that anyone can turn a blank PC into a media machine in less than 15 minutes. I didn't know about it until this presentation. In the past, I was mainly attached to Video Disk Recorder (VDR) as it allows the use of satellite receiver cards very easily. Hm, somehow I'm still missing my precious HTPC that I had to leave back in Germany years ago. It was great piece of hardware and software; self-built PC in a standard HiFi-sized (43cm) black desktop casing with 2 full-featured Hauppauge DVB-s cards, an old-fashioned Voodoo graphics card, WiFi card, Pioneer slot-in DVD drive, and fully remote controlled via infra-red thanks to Debian, VDR and LIRC. With EP Guide, scheduled recordings and general multimedia centre it offered all the necessary comfort in the living room, besides a Nintendo game console; actually a GameCube at that time... But I have to admit that putting OpenELEC on a Raspberry Pi would be a cool DIY project in the near future. 
LUGM - our next generation of Linux users (15.06.2013) Project Evil Genius (PEG) Don't be scared of the paragraph header. Ish gave us a cool explanation of why he named it PEG - Project Evil Genius; it's because of the time of day when he was scripting down his ideas to be able to build, package and provide software applications to various Linux distributions. The main influence came from openSuSE, but the platform didn't cater for his needs and ideas, so he started to work out something on his own. During his passionate session he also talked about the amazing experience he had thanks to other Linux users from all over the world. Over the next couple of days Ish promised to put his script on GitHub... Looking forward to that. Check out Ish's personal blog over at hacklog.in. Highly recommended reading. Why India? Simply because the registration fees per year for an Indian domain are approximately 20 times less than for a Mauritian domain (.mu). Exploring the beach of L'Escalier after the meeting 'After-party' at the beach of L'Escalier Phew, after such interesting sessions, ideas around Linux and good conversation during the breaks and over lunch, it was time for a little break-out. Selven suggested that we all head down to the beach of L'Escalier and get some impressions of nature down here in the south of the island. Talking about 'beach' ;-) - absolutely not comparable to the white-sanded ones here in Flic en Flac... There are no lagoons down at the south coast of Mauritius, and watching the breaking waves is a different experience and joy after all. Unfortunately, I was a little bit worried about the thoughtless littering at such a remote location. You have to drive on natural paths through the sugar cane fields, and I was really shocked by the amount of rubbish lying around almost everywhere. Sad, really sad, and it concurs with Yasir's recent article on the same topic. Résumé & outlook It was a great event. I met new people, had some good conversations, and even my children enjoyed themselves the whole day. The location was well-chosen, with enough space for everyone, parking spaces and even a playground for the children. Also, a big "Thank You" to Selven and his helpers for the organisation and preparation of lunch. I'm quite sure that this was an exceptional meeting of the LUGM and I'm really looking forward to the next gathering of Linux geeks. Hopefully, soon. All images are courtesy of Avinash Meetoo. More pictures are available on Flickr.

    Read the article

  • General Policies and Procedures for Maintaining the Value of Data Assets

    Here is a general list of policies and procedures for maintaining the value of data assets. Data Backup Policies and Procedures Backups are very important when dealing with data because there is always the chance of losing data due to faulty hardware or user activity. A strategic backup system should therefore be mandatory for all companies. That being said, in the real world some companies that I have worked for do not really have a good data backup plan, and when companies take this kind of approach to data backups the data is usually not really recoverable. Unfortunately, when companies do not regularly test their backup plans they get a false sense of security because they think that they are covered. However, I can tell you from personal and professional experience that a backup plan/system is never fully implemented until it is regularly tested prior to the time when it actually needs to be used. Disaster Recovery Plan Expanding on backup policies and procedures, a company also needs to have a disaster recovery plan in order to protect its data in case of a catastrophic disaster. Disaster recovery plans typically encompass how to restore all of a company's data and infrastructure to an operational status. Most disaster recovery plans also include time estimates for how long each step of the plan should take to execute. It is important to note that disaster recovery plans, just like backup plans, are never fully implemented until they have been tested. Disaster recovery plans should be tested regularly so that the business can be confident of losing little or no data in a catastrophic disaster. Firewall Policies and Content Filters One way companies can protect their data is by using a firewall to separate their internal network from the outside. Firewalls allow network access to be enabled or disabled as data passes through them, by applying various defined restrictions. Furthermore, firewalls can also be used to prevent access from the internal network to the outside based on these same factors. Common Firewall Restrictions Destination/Sender IP Address Destination/Sender Host Names Domain Names Network Ports Companies may also want to restrict what their network users view on the internet through things like content filters. Content filters allow a company to track what webpages a person has accessed and can also restrict users' access based on established rules set up in the content filter. This device and/or software can block access to domains or specific URLs based on a few factors. Common Content Filter Criteria Known malicious sites Specific Page Content Page Content Theme Anti-Virus/Malware Policies Fortunately, most companies utilize antivirus programs on all computers and servers, and for good reason; viruses have been known to do the following: Corrupt/Invalidate Data, Destroy Data, and Steal Data. Anti-virus applications are a great way to prevent any malicious application from gaining access to a company's data. However, anti-virus programs must be constantly updated because new viruses are always being created, and the anti-virus vendors need to distribute updates to their applications so that they can catch and remove them. Data Validation Policies and Procedures Data validation is very important to ensure that only accurate information is stored.
The existence of invalid data can cause major problems when businesses attempt to use data for knowledge-based decisions and for performance reporting. Data Scrubbing Policies and Procedures Data scrubbing is valuable to companies in one of two ways. The first is cleaning data before it is analyzed for report generation. The second is that it allows companies to remove things like personally identifiable information from their data before transmitting it between multiple environments or sending it to an external location. An example of this can be seen with medical records in regard to HIPAA laws that prohibit the storage of specific personal and medical information. Additionally, I have professionally run into a scenario where the Canadian government does not allow any Canadian's personal information to be stored on a server not located in Canada. Encryption Practices The use of encryption is very valuable when a company needs to store any personal information. It allows users with the appropriate access levels to view or confirm the existence or accuracy of data within a system, by either decrypting the information or encrypting a piece of data and comparing it to the stored version. Additionally, if for some unforeseen reason the data got into the wrong hands, they would first have to decrypt the data before they could even read it. Encryption simply adds an additional layer of protection around the data itself. Standard Normalization Practices The use of standard data normalization practices is very important when dealing with data because it can prevent a lot of potential issues by eliminating unnecessary data duplication. Issues caused by data duplication include excess use of data storage, an increased chance of invalid data, and overuse of data processing. Network and Database Security/Access Policies Every company has some form of network/data access policy - even if that form is having none at all. These policies help keep data from being seen by inappropriate users and prevent data from being updated or deleted by unauthorized users. In addition, without a good security policy there is a large potential for data to be corrupted by unassuming users or even stolen. Data Storage Policies Data storage policies are very important, depending on how they are implemented, especially when a company is trying to use them in conjunction with other policies like data backups. I have worked at companies where all network user folders are constantly backed up, and if a user wanted to ensure the existence of a piece of data in the form of a file then they had to store that file in their network folder. Conversely, I have also worked in places where, when a user logs on to or off of the network, their entire user profile is backed up. Training Policies One of the biggest ways to prevent data loss and ensure that data will remain a company asset is through training. Properly training employees on how to work within systems that access data is crucial to ensuring a company's data will remain an asset. Users need to be trained on how to manipulate a company's data in order to perform their tasks, reducing the chances of invalidating data.
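As a concrete illustration of the encryption practice described above, here is a minimal Java sketch using the standard javax.crypto API with AES-GCM. The key generation, the sample field value and the way the key is held in memory are simplifying assumptions; a real system would obtain its keys from a key management service.

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class FieldEncryptionSketch {
    public static void main(String[] args) throws Exception {
        // Key handling is simplified; production code would use a key management service.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();

        // A fresh IV per encrypted value, stored alongside the ciphertext.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        // Encrypt the sensitive field before it is stored.
        Cipher encrypt = Cipher.getInstance("AES/GCM/NoPadding");
        encrypt.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] stored = encrypt.doFinal("123-45-6789".getBytes(StandardCharsets.UTF_8));

        // An authorized reader decrypts the stored value to confirm its accuracy.
        Cipher decrypt = Cipher.getInstance("AES/GCM/NoPadding");
        decrypt.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        String recovered = new String(decrypt.doFinal(stored), StandardCharsets.UTF_8);
        System.out.println(recovered.equals("123-45-6789")); // prints true
    }
}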

    Read the article

  • Introduction to Human Workflow 11g

    - by agiovannetti
    Human Workflow is a component of SOA Suite just like BPEL, Mediator, Business Rules, etc. The Human Workflow component allows you to incorporate human intervention in a business process. You can use Human Workflow to create a business process that requires a manager to approve purchase orders greater than $10,000, or a business process that handles article reviews in which a group of reviewers needs to vote on/approve an article before it gets published. Human Workflow can handle the task assignment and routing as well as the generation of notifications to the participants. There are three common patterns or usages of Human Workflow: 1) Approval Scenarios: manage documents and other transactional data through approval chains. For example: expense report approval, vacation approval, hiring approval, etc. 2) Reviews by multiple users or groups: group collaboration and review of documents or proposals. For example, processing a sales quote which is subject to review by multiple people. 3) Case Management: workflows around work management or case management. For example, processing a service request. This could be routed to various people who all need to modify the task. It may also incorporate ad hoc routing which is unknown at design time. SOA 11g Human Workflow includes the following features: Assignment and routing of tasks to the correct users or groups. Deadlines, escalations, notifications, and other features required for ensuring the timely performance of a task. Presentation of tasks to end users through a variety of mechanisms, including a Worklist application. Organization, filtering, prioritization and other features required for end users to productively perform their tasks. Reports, reassignments, load balancing and other features required by supervisors and business owners to manage the performance of tasks. Human Workflow Architecture The Human Workflow component is divided into 3 modules: the service interface, the task definition and the client interface module. The Service Interface handles the interaction with BPEL and other components. The Client Interface handles the presentation of task data through clients like the Worklist application, portals and notification channels. The task definition module is in charge of managing the lifecycle of a task: Who should get the task assigned? What should happen next with the task? When must the task be completed? Should the task be escalated? And so on. Stages and Participants When you create a Human Task you need to specify how the task is assigned and routed. The first step is to define the stages and participants. A stage is just a logical group. A participant can be a user, a group of users or an application role. The participants indicate the type of assignment and routing that will be performed. Stages can be sequential or in parallel, and you can combine them to create any usage you require. See the diagram below: Assignment and Routing There are different ways a task can be assigned and routed: Single Approver: the task is assigned to a single user, group or role. For example, a vacation request is assigned to a manager. If the manager approves or rejects the request, the employee is notified of the decision. If the task is assigned to a group, then once one of the managers acts on it, the task is completed. Parallel: the task is assigned to a set of people who must work in parallel. This is commonly used for voting. For example, a task gets approved once 50% of the participants approve it. You can also set it up to require a unanimous vote.
Serial: participants must work in sequence. The most common scenario for this is management chain escalation. FYI (For Your Information): the task is assigned to participants who can view it and add comments and attachments, but cannot modify or complete the task. Task Actions The following is the list of actions that can be performed on a task: Claim: if a task is assigned to a group or multiple users, then the task must be claimed first before anyone can act on it. Escalate: if the participant is not able to complete a task, he/she can escalate it. The task is reassigned to his/her manager (up one level in the hierarchy). Pushback: the task is sent back to the previous assignee. Reassign: if the participant is a manager, he/she can delegate a task to his/her reports. Release: if a task is assigned to a group or multiple users, it can be released if the user who claimed the task cannot complete it. Any of the other assignees can then claim and complete the task. Request Information and Submit Information: used when the participant needs to supply more information or to request more information from the task creator or any of the previous assignees. Suspend and Resume: if a task is not relevant, it can be suspended. A suspension is indefinite; it does not expire until Resume is used to resume working on the task. Withdraw: if the creator of a task does not want to continue with it, for example because he wants to cancel a vacation request, he can withdraw the task. The business process determines what happens next. Renew: if a task is about to expire, the participant can renew it. The task expiration date is extended by one week. Notifications Human Workflow provides a mechanism for sending notifications to participants to alert them of changes to a task. Notifications can be sent via email, telephone voice message, instant messaging (IM) or short message service (SMS). Notifications can be sent when the task status changes to any of the following: Assigned/renewed/delegated/reassigned/escalated Completed Error Expired Request Info Resume Suspended Added/Updated comments and/or attachments Updated Outcome Withdraw Other Actions (e.g. acquiring a task) Here is an example of an email notification: Worklist Application The Oracle BPM Worklist application is the default user interface included in SOA Suite. It allows users to access and act on tasks that have been assigned to them. For example, from the Worklist application, a loan agent can review loan applications or a manager can approve employee vacation requests. Through the Worklist Application users can: Perform authorized actions on tasks, acquire and check out shared tasks, define personal to-do tasks and define subtasks. Filter the task view based on various criteria. Work with standard work queues, such as high priority tasks, tasks due soon and so on. Work queues allow users to create a custom view to group a subset of tasks in the worklist, for example high priority tasks, tasks due in 24 hours, expense approval tasks and more. Define custom work queues. Gain proxy access to part of another user's tasks. Define custom vacation rules and delegation rules. Enable group owners to define task dispatching rules for shared tasks. Collect a complete workflow history and audit trail. Use digital signatures for tasks. Run reports like Unattended tasks, Tasks productivity, etc. Here is a screenshot of what the Worklist Application looks like. On the right-hand side you can see the tasks that have been assigned to the user and the task's details.
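The parallel (voting) pattern described above ultimately reduces to an outcome calculation over the participants' responses. The sketch below is a plain-Java illustration of that calculation only - it is not the Oracle Human Workflow API, and the class and method names are made up for the example.

import java.util.Arrays;
import java.util.List;

public class ParallelVoteSketch {

    enum Vote { APPROVE, REJECT, PENDING }

    // The task outcome is "approved" once the share of approvals reaches the configured percentage.
    static boolean isApproved(List<Vote> votes, double requiredPercentage) {
        long approvals = votes.stream().filter(v -> v == Vote.APPROVE).count();
        return (100.0 * approvals / votes.size()) >= requiredPercentage;
    }

    public static void main(String[] args) {
        List<Vote> votes = Arrays.asList(Vote.APPROVE, Vote.APPROVE, Vote.REJECT, Vote.PENDING);
        System.out.println(isApproved(votes, 50.0));  // true: 2 of 4 participants approved
        System.out.println(isApproved(votes, 100.0)); // false: a unanimous vote would fail here
    }
}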
References Introduction to SOA Suite 11g Human Workflow Webcast Note 1452937.2 Human Workflow Information Center Using the Human Workflow Service Component 11.1.1.6 Human Workflow Samples Human Workflow APIs Java Docs

    Read the article

  • Clouds Everywhere But not a Drop of Rain – Part 3

    - by sxkumar
    I was sharing with you how a broad-based transformation such as cloud will increase an organization's agility and efficiency if process re-engineering is part of the plan. I have also stressed key enterprise requirements such as “broad and deep solutions”, “running your mission-critical applications” and “an automated and integrated set of capabilities”. Let me walk you through some key cloud attributes such as “elasticity” and “self-service” and what they mean for an enterprise-class cloud. I will also talk about how we at Oracle have taken a very enterprise-centric view to developing cloud solutions and how our products have been specifically engineered to address enterprise cloud needs. Cloud Elasticity and Enterprise Applications Requirements Easy and quick scalability for a short period of time is the signature of cloud-based solutions. It is this elasticity that allows you to dynamically redistribute your resources according to business priorities, helps increase your overall resource utilization, and reduces operational costs by allowing you to get the most out of your existing investment. Most public clouds offer an instant provisioning mechanism for compute power (CPU, RAM, disk); customers pay for the instance-hours (and bandwidth) they use, adding computing resources at peak times and removing them when they are no longer needed. This type of “just-in-time” serving of compute resources works well for “stateless” mid-tier servers such as web application servers and web servers that just need another machine to start and run on, but what does it really mean for an enterprise application and its underlying data? Most enterprise applications are not quite as “stateless”, and justifiably so. As such, how do you take advantage of cloud elasticity and make it relevant for your enterprise apps? This is where Cloud meets Grid Computing. At Oracle, we have invested an enormous amount of time, energy and resources in creating enterprise grid solutions. All our technology products offer built-in elasticity via clustering and dynamic scaling. With products like Real Application Clusters (RAC), Automatic Storage Management, WebLogic Clustering, and Coherence In-Memory Grid, we allow all your enterprise applications to benefit from cloud elasticity - both vertically and horizontally - without requiring any application changes. A number of technology vendors take the rather simplistic route of starting up additional VMs, or removing unneeded ones, as the "Cloud Scale-Out" solution. While this may work for stateless mid-tier servers, where load balancers can handle the addition and removal of instances transparently, following a similar approach for the database tier - often called "database sharding" - requires significant application modification and typically does not work with off-the-shelf packaged applications. Technologies like Oracle Database Real Application Clusters and Automatic Storage Management, on the other hand, bring the benefits of incremental scalability and on-demand elasticity to ANY application by providing a simplified abstraction layer where the application does not need to deal with data spread over multiple database instances. Rather, applications just talk to a single database, and the database software takes care of aggregating resources across multiple hardware components. It is technologies like these that truly make a cloud solution relevant for enterprises.
For customers who are looking for a next-generation hardware consolidation platform, our engineered systems (e.g. Exadata, Exalogic) not only provide an incredible amount of performance and capacity, they also reduce data center complexity and simplify operations. Assemble, Deploy and Manage Enterprise Applications for the Cloud Products like Oracle Virtual Assembly Builder (OVAB) resolve the complex problem of bringing cloud speed to complex multi-tier applications. With assemblies, you can not only provision all components of a multi-tier application and wire them together at the push of a button; other aspects of the application lifecycle, such as real-time application testing, scale-up/scale-down, and performance and availability monitoring, are also automated using Oracle Enterprise Manager. An essential criterion for an enterprise cloud to succeed is the ability to ensure business service levels, especially when business users have full visibility of the usage cost through either a “showback” or a “chargeback”. With Oracle Enterprise Manager 12c, we have created the most comprehensive cloud management solution in the industry, capable of managing business service levels “applications-to-disk” in an enterprise private cloud - all from a single console. It is the only cloud management platform in the industry that allows you to deliver infrastructure, platform and application cloud services out of the box. Moreover, it offers integrated and complete lifecycle management of the cloud - including planning and setup, service delivery, operations management, metering and chargeback, and more. Sounds unbelievable? Well, just watch this space for more details on how Oracle Enterprise Manager 12c is the nerve center of Oracle Cloud! Our cloud solution portfolio is also the broadest and deepest in the industry - covering public, private and hybrid clouds across infrastructure, platform and application services. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And in large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies. Summary But the intent of this blog post isn't to dwell on how great our solutions are (these are just some examples to illustrate how we at Oracle have approached this problem space). Rather, it is to help you ask the right questions before you embark on your cloud journey. So to summarize, here are the key takeaways. It is critical that you are clear on why you are building the cloud. Successful organizations keep business benefits as the first and foremost cloud objective. On the other hand, those who approach this purely as a technology project are more likely to fail. Think about where you want to be in 3-5 years before you get started. Your long-term objectives should determine what your first step ought to be. As obvious as it may seem, more people than not make the first move without knowing where they are headed. Don't make the mistake of equating cloud with virtualization and Infrastructure-as-a-Service (IaaS). Spinning up a VM on demand will give some short-term relief to your IT staff but is unlikely to solve your larger business problems. As such, even if IaaS is your first step towards a more comprehensive cloud, plan the roadmap around those higher-level services before you begin. And ask your vendors how they are going to be your partners in this journey.
Capabilities like self-service access and chargeback/showback are absolutely critical if you really expect your cloud to be transformational. Your business won't see the full benefits of the cloud until it empowers them with the same kind of control and transparency that they are used to when using a public cloud service. Evaluate the benefits of integration, as opposed to blindly following a best-of-breed strategy. Integration is a huge challenge, and even more so in a cloud environment. There are enormous costs associated with stitching a solution together out of disparate components, and even more with maintaining it. Hope you found these ideas helpful. Looking forward to hearing your thoughts and experiences.

    Read the article

  • Restructuring a large Chrome Extension/WebApp

    - by A.M.K
    I have a very complex Chrome Extension that has gotten too large to maintain in its current format. I'd like to restructure it, but I'm 15 and this is the first webapp or extension of it's type I've built so I have no idea how to do it. TL;DR: I have a large/complex webapp I'd like to restructure and I don't know how to do it. Should I follow my current restructure plan (below)? Does that sound like a good starting point, or is there a different approach that I'm missing? Should I not do any of the things I listed? While it isn't relevant to the question, the actual code is on Github and the extension is on the webstore. The basic structure is as follows: index.html <html> <head> <link href="css/style.css" rel="stylesheet" /> <!-- This holds the main app styles --> <link href="css/widgets.css" rel="stylesheet" /> <!-- And this one holds widget styles --> </head> <body class="unloaded"> <!-- Low-level base elements are "hardcoded" here, the unloaded class is used for transitions and is removed on load. i.e: --> <div class="tab-container" tabindex="-1"> <!-- Tab nav --> </div> <!-- Templates for all parts of the application and widgets are stored as elements here. I plan on changing these to <script> elements during the restructure since <template>'s need valid HTML. --> <template id="template.toolbar"> <!-- Template content --> </template> <!-- Templates end --> <!-- Plugins --> <script type="text/javascript" src="js/plugins.js"></script> <!-- This contains the code for all widgets, I plan on moving this online and downloading as necessary soon. --> <script type="text/javascript" src="js/widgets.js"></script> <!-- This contains the main application JS. --> <script type="text/javascript" src="js/script.js"></script> </body> </html> widgets.js (initLog || (window.initLog = [])).push([new Date().getTime(), "A log is kept during page load so performance can be analyzed and errors pinpointed"]); // Widgets are stored in an object and extended (with jQuery, but I'll probably switch to underscore if using Backbone) as necessary var Widgets = { 1: { // Widget ID, this is set here so widgets can be retreived by ID id: 1, // Widget ID again, this is used after the widget object is duplicated and detached size: 3, // Default size, medium in this case order: 1, // Order shown in "store" name: "Weather", // Widget name interval: 300000, // Refresh interval nicename: "weather", // HTML and JS safe widget name sizes: ["tiny", "small", "medium"], // Available widget sizes desc: "Short widget description", settings: [ { // Widget setting specifications stored as an array of objects. These are used to dynamically generate widget setting popups. type: "list", nicename: "location", label: "Location(s)", placeholder: "Enter a location and press Enter" } ], config: { // Widget settings as stored in the tabs object (see script.js for storage information) size: "medium", location: ["San Francisco, CA"] }, data: {}, // Cached widget data stored locally, this lets it work offline customFunc: function(cb) {}, // Widgets can optionally define custom functions in any part of their object refresh: function() {}, // This fetches data from the web and caches it locally in data, then calls render. It gets called after the page is loaded for faster loads render: function() {} // This renders the widget only using information from data, it's called on page load. 
} }; script.js (initLog || (window.initLog = [])).push([new Date().getTime(), "These are also at the end of every file"]); // Plugins, extends and globals go here. i.e. Number.prototype.pad = .... var iChrome = function(refresh) { // The main iChrome init, called with refresh when refreshing to not re-run libs iChrome.Status.log("Starting page generation"); // From now on iChrome.Status.log is defined, it's used in place of the initLog iChrome.CSS(); // Dynamically generate CSS based on settings iChrome.Tabs(); // This takes the tabs stored in the storage (see fetching below) and renders all columns and widgets as necessary iChrome.Status.log("Tabs rendered"); // These will be omitted further along in this excerpt, but they're used everywhere // Checks for justInstalled => show getting started are run here /* The main init runs the bare minimum required to display the page, this sets all non-visible or instantly need things (such as widget dragging) on a timeout */ iChrome.deferredTimeout = setTimeout(function() { iChrome.deferred(refresh); // Pass refresh along, see above }, 200); }; iChrome.deferred = function(refresh) {}; // This calls modules one after the next in the appropriate order to finish rendering the page iChrome.Search = function() {}; // Modules have a base init function and are camel-cased and capitalized iChrome.Search.submit = function(val) {}; // Methods within modules are camel-cased and not capitalized /* Extension storage is async and fetched at the beginning of plugins.js, it's then stored in a variable that iChrome.Storage processes. The fetcher checks to see if processStorage is defined, if it is it gets called, otherwise settings are left in iChromeConfig */ var processStorage = function() { iChrome.Storage(function() { iChrome.Templates(); // Templates are read from their elements and held in a cache iChrome(); // Init is called }); }; if (typeof iChromeConfig == "object") { processStorage(); } Objectives of the restructure Memory usage: Chrome apparently has a memory leak in extensions, they're trying to fix it but memory still keeps on getting increased every time the page is loaded. The app also uses a lot on its own. Code readability: At this point I can't follow what's being called in the code. While rewriting the code I plan on properly commenting everything. Module interdependence: Right now modules call each other a lot, AFAIK that's not good at all since any change you make to one module could affect countless others. Fault tolerance: There's very little fault tolerance or error handling right now. If a widget is causing the rest of the page to stop rendering the user should at least be able to remove it. Speed is currently not an issue and I'd like to keep it that way. How I think I should do it The restructure should be done using Backbone.js and events that call modules (i.e. on storage.loaded = init). Modules should each go in their own file, I'm thinking there should be a set of core files that all modules can rely on and call directly and everything else should be event based. Widget structure should be kept largely the same, but maybe they should also be split into their own files. AFAIK you can't load all templates in a folder, therefore they need to stay inline. Grunt should be used to merge all modules, plugins and widgets into one file. Templates should also all be precompiled. Question: Should I follow my current restructure plan? Does that sound like a good starting point, or is there a different approach that I'm missing? 
Should I not do any of the things I listed? Do applications written with Backbone tend to be more intensive (memory and speed) than ones written in Vanilla JS? Also, can I expect to improve this with a proper restructure or is my current code about as good as can be expected?

    Read the article

  • JPA 2?EJB 3.1?JSF 2????????! WebLogic Server 12c?????????Java EE 6??????|WebLogic Channel|??????

    - by ???02
    2012?2???????????????WebLogic Server 12c?????????Java EE 6?????????????????????????????????????????????????????????????Oracle Enterprise Pack for Eclipse 12c??WebLogic Server 12c(???)????Java EE 6??????3??????????????????????????????JPA 2.0??????????·?????????EJB 3.1???????·???????????????(???)???????O/R?????????????JPA 2.0 Java EE 6????????????????????Web?????????????3?????(3????)???????·????????????·????????????????????????????????JPA(Java Persistence API) 2.0???EJB(Enterprise JavaBeans) 3.1???JSF(JavaServer Faces) 2.0????3????????????????·???????????JPA??Java??????????????·?????????????O/R?????????????????????·???????????EJB?Session Bean??????????????????·??????????????????????JSF??????????????????????????????????????? ??????JPA????Oracle Database??EMPLOYEES?????Java??????????????????????Entity Bean??????XML?????????????????????????XML????????????????????????????????????????????????????·?????????????????????????????????????????????????????????????Java EE 6??????JPA 2.0??????????·???????????????????????????????????????????????????????????????????????????????????????? ?????????????????????????????????????????????????????????????Oracle Enterprise Pack for Eclipse(OEPE)??????File????????New?-?Other??????? ??????New??????????????????????????Web?-?Dynamic Web Project???????Next????????????????Dynamic Web Project?????????????Project name????OOW???????????Target Runtime????New Runtime????????? ???New Server Runtime Environment???????????????Oracle?-?Oracle WebLogic Server 12c(12.1.1)???????Next???????????????????????????WebLogic home????C:\Oracle\Middleware\wlserver_12.1???????Finish?????????????WebLogic Home????????????????????????Java home?????????????????????Finish??????????????????????Dynamic Web Project????????????????Finish??????????????????JPA 2.0??????????·?????? ???????????????JPA 2.0???????????????·??????????????????Eclipse??Project Explorer?(??????·???)?????????OOW?????????????????????????????·???????????????Properties?????????????????·???·????????????????????????????Project Facets?????????????JPA??????(?????????????Details?????JPA 2.0?????????????????????)???????????????????Further configuration available????????? ???Modify Faceted Project??????????????????????????????????Connection????????????????????????????Add Connection????????? ??????New Connection Profile????????????????Connection Profile Type????Oracle Database Connection??????Next???????????? ???Specify a Driver and Connection Details???????Drivers????Oracle Database 10g Driver Default???????????Properties?????????????????????SIDxeHostlocalhostPort number1521User nameHRPasswordhr ???????????Test Connection??????????????????Ping Succeeded!?????????????????????????????Finish???????????Modify Faceted Project????????OK????????????????Properties for OOW????????OK?????????????????? ?????????Eclipse????????????????OOW?????????????????·???????????????JPA Tools?-?Generate Entities from Tables...??????? ????Generate Custom Entities???????????????????????????????Schema????HR??????Tables????EMPLOYEES???????????Next???????????? ???????????Next???????????Customize Default Entity Generation??????Package????model???????Finish?????????????JPQL?????????? 
?????????Oracle Database??EMPLOYEES??????????????????·????model.Employee.java?????????????????????????????????·?????OOW????Java Resources?-?src?-?model???????Employee.java????????????????????????????????·???Employee????(Employee.java)?package model; import java.io.Serializable; import java.math.BigDecimal; import java.util.Date; import java.util.Set; import javax.persistence.Column;<...?...>/**  * The persistent class for the EMPLOYEES database table.  *  */ @Entity  // ?@Table(name="EMPLOYEES")  // ?// Apublic class Employee implements Serializable {        private static final long serialVersionUID = 1L;       @Id  // ?       @Column(name="EMPLOYEE_ID")        private long employeeId;        @Column(name="COMMISSION_PCT")        private BigDecimal commissionPct;        @Column(name="DEPARTMENT_ID")        private BigDecimal departmentId;        private String email;        @Column(name="FIRST_NAME")        private String firstName;       @Temporal( TemporalType.DATE)  //?       @Column(name="HIRE_DATE")        private Date hireDate;        @Column(name="JOB_ID")        private String jobId;        @Column(name="LAST_NAME")        private String lastName;        @Column(name="PHONE_NUMBER")        private String phoneNumber;        private BigDecimal salary;        //bi-directional many-to-one association to Employee<...?...>}  ???????????????·???????????????????????????????????????????@Table(name="")??????@Table??????????????????????????????????????? ?????????????????????????????????????·???????????????? ?????????????????????????????SQL?Data?????????? ???????????????A?????JPA?????????JPQL(Java Persistence Query Language)?????????????JPQL?????SQL???????????????????????????????????????????????????????????????????????????????????Employee.selectByNameEmployee??firstName????????????????????employeeId????????? ?????????????????????import java.util.Date;import java.util.Set;import javax.persistence.Column;<...?...>/**  * The persistent class for the EMPLOYEES database table.  *  */ @Entity  // ?@Table(name="EMPLOYEES")  // ?@NamedQueries({       @NamedQuery(name="Employee.selectByName" , query="select e from Employee e where e.firstName like :name order by e.employeeId")})<...?...> ?????????·??????OOW?-?JPA Content?-?persistent.xml??????Connection???????????????Database????JTA data source:???jdbc/test????????????????????????Java EE 6??????JPA 2.0???????????????????????????????????·??????????????????????????????????????SQL????????????????????????·????????????·??????????????XML??????????????????1??????????????????????????????????????????????????????????????????EJB 3.1????????·???????????EJB 3.1????????·?????????????????EJB 3.1?Stateless Session Bean?????·????????????????·???????????????????·??????????????????? EJB3.1?????JPA 2.0???????????·???????????????????????XML???????????????????????????????EJB 3.1?????????·????EJB?????????????????????????????????????????????????????????????? ????????EJB 3.1?Session Bean?????·????????????????????????????????????????????????????public List<Employee> getEmp(String keyword)firstName????????????Employee?????? ????????????????????·???????????OOW????????????·???????????????New?-?Other???????????????????????????????????EJB?-?Session Bean(EJB 3.x)??????NEXT????????????????????Create EJB 3.x Session Bean?????????????Java Package????ejb???class name????EmpLogic???????????State Type????Stateless?????????No-interface???????????????????????Finish???????????? 
?????????Stateless Session Bean??????·?????EmpLogic.java????????????????????EmpLogic????·????????EJB?????????????Stateless Session Bean?????????@Stateless?????????????????????????????????????EmpLogic????(EmpLogic.java)?package ejb;import javax.ejb.LocalBean;import javax.ejb.Stateless;<...?...>import model.Employee;@Stateless@LocalBeanpublic class EmpLogic {       public EmpLogic() {       }} ??????????????????????????????????????·???????????????????????import??????????????????EmpLogic??????????????????????????·???????????????????????import????????(EmpLogic.java)?package ejb;import javax.ejb.LocalBean;import javax.ejb.Stateless;import javax.persistence.EntityManager;  // ?import javax.persistence.PersistenceContext;  // ?<...?...>import model.Employee;@Stateless@LocalBeanpublic class EmpLogic {      @PersistenceContext(unitName = "OOW")  // ?      private EntityManager em;  // ?       public EmpLogic() {       }} ?????????·???????JPA???????????????????·????????????????????????????CRUD???????????????????·????????????EntityManager???????????????????????????1????????????????·???????????????????????@PersistenceContext?????unitName?????????????persistence.xml????persistence-unit???name?????????????? ???????EmpLogic?????·???????????????????????????????????????????????????????????????????????????????EmpLogic????????·???????(EmpLogic.java)?package ejb;import java.util.List;  // ? import javax.ejb.LocalBean;import javax.ejb.Stateless;import javax.persistence.EntityManager;  // ? import javax.persistence.PersistenceContext;  // ? <...?...>import model.Employee;@Stateless@LocalBeanpublic class EmpLogic {       @PersistenceContext(unitName = "OOW")  // ?        private EntityManager em;  // ?        public EmpLogic() {       }      @SuppressWarnings("unchecked")  // ?      public List<Employee> getEmp(String keyword) {  // ?             StringBuilder param = new StringBuilder();  // ?             param.append("%");  // ?             param.append(keyword);  // ?             param.append("%");  // ?             return em.createNamedQuery("Employee.selectByName")  // ?                    .setParameter("name", param.toString()).getResultList();  // ?      }} ???EJB 3.1???Stateless Session Bean?????????? ???JSF 2.0???????????????????????????????????????????????????JAX-RS????RESTful?Web??????????????????????
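The article points to JSF 2.0 as the next step, so a rough sketch of a JSF 2.0 managed bean that injects the EmpLogic session bean shown above might look like the following. The bean name, property names and page navigation are assumptions for illustration, not details taken from the article.

package view;

import java.util.List;
import javax.ejb.EJB;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.RequestScoped;
import ejb.EmpLogic;
import model.Employee;

// Minimal JSF 2.0 managed bean that delegates the employee search to the EJB.
@ManagedBean
@RequestScoped
public class EmpSearchBean {

    @EJB
    private EmpLogic empLogic;

    private String keyword;
    private List<Employee> results;

    // Action method bound to a command button on the JSF page.
    public String search() {
        results = empLogic.getEmp(keyword);
        return null; // stay on the same view and re-render the result table
    }

    public String getKeyword() { return keyword; }
    public void setKeyword(String keyword) { this.keyword = keyword; }
    public List<Employee> getResults() { return results; }
}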

    Read the article

  • CSS Positioning

    - by Davey
    Trying to mess with this wordpress theme and can't figure out why the sidebar is stacking underneath the content block. Any help would be very appreciated. http://www.buffalostreetbooks.com/events CSS: body { font-family: Arial, Helvetica, Verdana, Sans-serif; font-size: 10pt; background-color: #692022; background-image:url("http://www.buffalostreetbooks.com/wp-content/themes/autumn-leaves/images/repeatflower.png"); } body,h1#blog-title { margin: 0; padding: 0; } a { color: blue; } a:hover { color: #FF8C00; } a img { border: 0 none; } #wrapper { width: 960px; margin: 0 auto; background-color: #F4FBF4; border-left: 1px solid #ccc; border-right: 1px solid #ccc; } #header { background-image:url("http://www.buffalostreetbooks.com/wp-content/themes/autumn-leaves/images/headertime.png"); width:768px; height: 200px; } #inner-header { padding: 125px 1em 0; } h1#blog-title { font-size: 2em; } h1#blog-title a { color: #800000; } .entry-title a { color: #CD853F; } h1#blog-title a, .entry-title a, #footer a { text-decoration: none; } h1#blog-title a:hover, .entry-title a:hover, #footer a:hover { text-decoration: underline; } div.skip-link { display: none; } #menu { border-bottom: 1px solid #ccc; } #menu a { color: #000; } #menu a:hover { text-decoration: underline; } #menu li.current_page_item a, #menu li.current_page_item a:hover { background-color: #DFC28B; text-decoration: none; } #content { padding: 1em; width:600px; } .entry-title { font-size: 1.5em; margin: 1em 0 0 0; } abbr.published { color: #666; border: 0 none; } .entry-meta, .entry-date { color: #666; } #comments-list .avatar { float: left; margin-right: 1em; } #comments-list .n { font-weight: bold; } .entry-meta, .comment-meta { font-style: italic; } #comments-list p { clear: left; } #primary { padding-left: 1em; font-size: 0.9em; border-left: 1px solid #ccc; border-bottom: 1px solid #ccc; background-color: #FFFACD; } #footer { text-align: center; font-size: 0.8em; border-top: 1px solid #ccc; border-bottom: 1px solid #ccc; margin-bottom: 1em; } #inner-footer { padding: 1em 0; } .entry-meta, .entry-meta a, .comment-meta, .comment-meta a, .sidebar, .sidebar a, #footer, #footer a { color: #666; } /* LAYOUT: Two-Column (Right) DESCRIPTION: Two-column fluid layout with one sidebars right of content */ div#container { margin:0 0 0 0; width:960px; height:100%; } div#content { margin:0 0 0 0; } div.sidebar { overflow:hidden; width:280px; min-height:500px; clear:both; } div#secondary { clear:right; } div#footer { clear:both; width:100%; } /* Just some example content */ div#menu { height:2em; width:100%; } div#menu ul,div#menu ul ul { line-height:2em; list-style:none; margin:0; padding:0; } div#menu ul a { display:block; margin-right:1em; padding:0 0.5em; text-decoration:none; } div#menu ul ul ul a { font-style:italic; } div#menu ul li ul { left:-999em; position:absolute; } div#menu ul li:hover ul { left:auto; } .entry-title,.entry-meta { clear:both; } div#primary { } form#commentform .form-label { margin:1em 0 0; } form#commentform span.required { background:#fff; color:#c30; } form#commentform,form#commentform p { padding:0; } input#author,input#email,input#url,textarea#comme nt { padding:0.2em; } div.comments ol li { margin:0 0 3.5em; } textarea#comment { height:13em; margin:0 0 0.5em; overflow:auto; width:66%; } .alignright,img.alignright{ float:right; margin:1em 0 0 1em; } .alignleft,img.alignleft{ float:left; margin:1em 1em 0 0; } .aligncenter,img.aligncenter{ display:block; margin:1em auto; text-align:center; } div.gallery { clear:both; 
height:180px; margin:1em 0; width:100%; } p.wp-caption-text{ font-style:italic; } div.gallery dl{ margin:1em auto; overflow:hidden; text-align:center; } div.gallery dl.gallery-columns-1 { width:100%; } div.gallery dl.gallery-columns-2 { width:49%; } div.gallery dl.gallery-columns-3 { width:33%; } div.gallery dl.gallery-columns-4 { width:24%; } div.gallery dl.gallery-columns-5 { width:19%; } div#nav-above { margin-bottom:1em; } div#nav-below { margin-top:1em; } div#nav-images { height:150px; margin:1em 0; } div.navigation { height:1.25em; } div.navigation div.nav-next { float:right; text-align:right; } div.sidebar h3 { font-size:1.2em; } div.sidebar input#s { width:7em; } div.sidebar li { list-style:none; margin:0 0 2em; } div.sidebar li form { margin:0.2em 0 0; padding:0; } div.sidebar ul ul { margin:0 0 0 2em; } div.sidebar ul ul li { list-style:disc; margin:0; } div.sidebar ul ul ul { margin:0 0 0 0.5em; } div.sidebar ul ul ul li { list-style:circle; } div#menu ul li,div.gallery dl,div.navigation div.nav-previous { float:left; } input#author,input#email,input#url,div.navigation div { width:50%; } div.gallery *,div.sidebar div,div.sidebar h3,div.sidebar ul { margin:0; padding:0; }

    Read the article

  • NetworkOnMainThreadException while using AsyncTask

    - by Fansher
    Im making an app that uses the internet to retrive information. I get an NetworkOnMainThreadException as i tried to run it on 3.0 and above and have therefore tried to set it up using AsyncTask, but it still gives the exception and i don't know what is wrong. Oddly enough i read on this thread Android NetworkOnMainThreadException inside of AsyncTask that if you just removes the android:targetSdkVersion="10" statement from the manifest file it will be able to run. This works but i don't find it as the right solution to solve the problem this way. So if anyone can tell me what im doing wrong with the AsyncTask i will really appriciate it. Also if there is anybody that knows why removing the statement in the manifest makes it work, im really interested in that also. My code looks like this: public class MainActivity extends Activity { static ArrayList<Tumblr> tumblrs; ListView listView; TextView footer; int offset = 0; ProgressDialog pDialog; View v; String responseBody = null; HttpResponse r; HttpEntity e; String searchUrl; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); requestWindowFeature(Window.FEATURE_NO_TITLE); getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN, WindowManager.LayoutParams.FLAG_FULLSCREEN); final ConnectivityManager conMgr = (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE); final NetworkInfo activeNetwork = conMgr.getActiveNetworkInfo(); if (activeNetwork != null && activeNetwork.isConnected()) { setContentView(R.layout.main); try { tumblrs = getTumblrs(); listView = (ListView) findViewById(R.id.list); View v = getLayoutInflater().inflate(R.layout.footer_layout, null); footer = (TextView) v.findViewById(R.id.tvFoot); listView.addFooterView(v); listView.setAdapter(new UserItemAdapter(this, R.layout.listitem)); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } catch (JSONException e) { e.printStackTrace(); } new GetChicks().execute(); footer.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { new loadMoreListView().execute(); } }); } else { setContentView(R.layout.nonet); } } public class UserItemAdapter extends ArrayAdapter<Tumblr> { public UserItemAdapter(Context context, int imageViewResourceId) { super(context, imageViewResourceId, tumblrs); } @Override public View getView(int position, View convertView, ViewGroup parent) { View v = convertView; if (v == null) { LayoutInflater vi = (LayoutInflater) getSystemService(Context.LAYOUT_INFLATER_SERVICE); v = vi.inflate(R.layout.listitem, null); } Tumblr tumblr = tumblrs.get(position); if (tumblr != null) { ImageView image = (ImageView) v.findViewById(R.id.avatar); if (image != null) { image.setImageBitmap(getBitmap(tumblr.image_url)); } } return v; } } public Bitmap getBitmap(String bitmapUrl) { try { URL url = new URL(bitmapUrl); return BitmapFactory.decodeStream(url.openConnection() .getInputStream()); } catch (Exception ex) { return null; } } public ArrayList<Tumblr> getTumblrs() throws ClientProtocolException, IOException, JSONException { searchUrl = "http://api.tumblr.com/v2/blog/"webside"/posts?api_key=API_KEY"; ArrayList<Tumblr> tumblrs = new ArrayList<Tumblr>(); return tumblrs; } private class GetChicks extends AsyncTask<Void, Void, Void> { @Override protected Void doInBackground(Void... 
unused) { // TODO Auto-generated method stub runOnUiThread(new Runnable() { public void run() { HttpClient client = new DefaultHttpClient(); HttpGet get = new HttpGet(searchUrl); HttpResponse r = null; try { r = client.execute(get); int status = r.getStatusLine().getStatusCode(); if (status == 200) { e = r.getEntity(); responseBody = EntityUtils.toString(e); } } catch (ClientProtocolException e1) { // TODO Auto-generated catch block e1.printStackTrace(); } catch (IOException e1) { // TODO Auto-generated catch block e1.printStackTrace(); } JSONObject jsonObject; try { jsonObject = new JSONObject(responseBody); JSONArray posts = jsonObject.getJSONObject("response") .getJSONArray("posts"); for (int i = 0; i < posts.length(); i++) { JSONArray photos = posts.getJSONObject(i) .getJSONArray("photos"); for (int j = 0; j < photos.length(); j++) { JSONObject photo = photos.getJSONObject(j); String url = photo.getJSONArray("alt_sizes") .getJSONObject(0).getString("url"); Tumblr tumblr = new Tumblr(url); tumblrs.add(tumblr); } } } catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); } } }); return null; } } public class Tumblr { public String image_url; public Tumblr(String url) { this.image_url = url; } } private class loadMoreListView extends AsyncTask<Void, Void, Void> { @Override protected void onPreExecute() { // Showing progress dialog before sending http request pDialog = new ProgressDialog(MainActivity.this); pDialog.setMessage("More chicks coming up.."); pDialog.setIndeterminate(true); pDialog.setCancelable(false); pDialog.show(); } @Override protected Void doInBackground(Void... unused) { // TODO Auto-generated method stub runOnUiThread(new Runnable() { public void run() { // increment current page offset += 2; // Next page request tumblrs.clear(); String searchUrl = "http://api.tumblr.com/v2/blog/"webside"/posts?api_key=API_KEY&limit=2 + offset; HttpClient client = new DefaultHttpClient(); HttpGet get = new HttpGet(searchUrl); HttpResponse r = null; try { r = client.execute(get); int status = r.getStatusLine().getStatusCode(); if (status == 200) { HttpEntity e = r.getEntity(); responseBody = EntityUtils.toString(e); } } catch (ClientProtocolException e1) { // TODO Auto-generated catch block e1.printStackTrace(); } catch (IOException e1) { // TODO Auto-generated catch block e1.printStackTrace(); } JSONObject jsonObject; try { jsonObject = new JSONObject(responseBody); JSONArray posts = jsonObject.getJSONObject("response") .getJSONArray("posts"); for (int i = 0; i < posts.length(); i++) { JSONArray photos = posts.getJSONObject(i) .getJSONArray("photos"); for (int j = 0; j < photos.length(); j++) { JSONObject photo = photos.getJSONObject(j); String url = photo.getJSONArray("alt_sizes") .getJSONObject(0).getString("url"); Tumblr tumblr = new Tumblr(url); tumblrs.add(tumblr); } } } catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); } // Setting new scroll position listView.setSelectionFromTop(0, 0); } }); return null; } protected void onPostExecute(Void unused) { pDialog.dismiss(); } } @Override public boolean onCreateOptionsMenu(android.view.Menu menu) { // TODO Auto-generated method stub super.onCreateOptionsMenu(menu); MenuInflater blowUp = getMenuInflater(); blowUp.inflate(R.menu.cool_menu, menu); return true; } @Override public boolean onOptionsItemSelected(MenuItem item) { // TODO Auto-generated method stub switch (item.getItemId()) { case R.id.aboutUs: Intent i = new Intent("com.example.example.ABOUT"); startActivity(i); break; 
case R.id.refresh: Intent f = new Intent(MainActivity.this, MainActivity.class); startActivity(f); finish(); break; case R.id.exit: finish(); break; } return false; } } Thanks for helping out.
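A minimal sketch of the pattern the question is aiming for - the network call kept inside doInBackground (the worker thread) and only the UI update in onPostExecute (the UI thread), which is what avoids NetworkOnMainThreadException - might look like the following. The adapter field is assumed to hold the question's UserItemAdapter, and error handling is trimmed.

// Sketch only: no runOnUiThread inside doInBackground, so the HTTP call really
// runs off the main thread; the list is updated only once the result is back.
private class GetChicks extends AsyncTask<Void, Void, ArrayList<Tumblr>> {

    @Override
    protected ArrayList<Tumblr> doInBackground(Void... unused) {
        ArrayList<Tumblr> result = new ArrayList<Tumblr>();
        try {
            HttpClient client = new DefaultHttpClient();
            HttpResponse response = client.execute(new HttpGet(searchUrl));
            if (response.getStatusLine().getStatusCode() == 200) {
                String body = EntityUtils.toString(response.getEntity());
                JSONArray posts = new JSONObject(body)
                        .getJSONObject("response").getJSONArray("posts");
                for (int i = 0; i < posts.length(); i++) {
                    JSONArray photos = posts.getJSONObject(i).getJSONArray("photos");
                    for (int j = 0; j < photos.length(); j++) {
                        String url = photos.getJSONObject(j).getJSONArray("alt_sizes")
                                .getJSONObject(0).getString("url");
                        result.add(new Tumblr(url));
                    }
                }
            }
        } catch (Exception e) {
            e.printStackTrace(); // no UI access on the worker thread
        }
        return result;
    }

    @Override
    protected void onPostExecute(ArrayList<Tumblr> result) {
        // Runs on the UI thread, so it is safe to touch the list and adapter here.
        tumblrs.addAll(result);
        adapter.notifyDataSetChanged(); // "adapter" is an assumed field holding the UserItemAdapter
    }
}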

    Read the article

  • "power limit notification" clobbering on 12G Dell servers with RHEL6

    - by Andrew B
    Server: Poweredge r620 OS: RHEL 6.4 Kernel: 2.6.32-358.18.1.el6.x86_64 I'm experiencing application alarms in my production environment. Critical CPU hungry processes are being starved of resources and causing a processing backlog. The problem is happening on all the 12th Generation Dell servers (r620s) in a recently deployed cluster. As near as I can tell, instances of this happening are matching up to peak CPU utilization, accompanied by massive amounts of "power limit notification" spam in dmesg. An excerpt of one of these events: Nov 7 10:15:15 someserver [.crit] CPU12: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU0: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU6: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU14: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU18: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU2: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU4: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU16: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU0: Package power limit notification (total events = 11) Nov 7 10:15:15 someserver [.crit] CPU6: Package power limit notification (total events = 13) Nov 7 10:15:15 someserver [.crit] CPU14: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU18: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU20: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU8: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU2: Package power limit notification (total events = 12) Nov 7 10:15:15 someserver [.crit] CPU10: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU22: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU4: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU16: Package power limit notification (total events = 13) Nov 7 10:15:15 someserver [.crit] CPU20: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU8: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU10: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU22: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU15: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU3: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU1: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU5: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU17: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU13: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU15: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU3: Package power limit notification (total events = 374) Nov 7 10:15:15 someserver [.crit] CPU1: Package power limit notification (total events = 376) Nov 7 10:15:15 someserver [.crit] CPU5: Package power limit 
notification (total events = 376) Nov 7 10:15:15 someserver [.crit] CPU7: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU19: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU17: Package power limit notification (total events = 377) Nov 7 10:15:15 someserver [.crit] CPU9: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU21: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU23: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU11: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU13: Package power limit notification (total events = 376) Nov 7 10:15:15 someserver [.crit] CPU7: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU19: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU9: Package power limit notification (total events = 374) Nov 7 10:15:15 someserver [.crit] CPU21: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU23: Package power limit notification (total events = 374) A little Google Fu reveals that this is typically associated with the CPU running hot, or voltage regulation kicking in. I don't think that's what is happening though. Temperature sensors for all servers in the cluster are running fine, Power Cap Policy is disabled in the iDRAC, and my System Profile is set to "Performance" on all of these servers: # omreport chassis biossetup | grep -A10 'System Profile' System Profile Settings ------------------------------------------ System Profile : Performance CPU Power Management : Maximum Performance Memory Frequency : Maximum Performance Turbo Boost : Enabled C1E : Disabled C States : Disabled Monitor/Mwait : Enabled Memory Patrol Scrub : Standard Memory Refresh Rate : 1x Memory Operating Voltage : Auto Collaborative CPU Performance Control : Disabled A Dell mailing list post describes the symptoms almost perfectly. Dell suggested that the author try using the Performance profile, but that didn't help. He ended up applying some settings in Dell's guide for configuring a server for low latency environments and one of those settings (or a combination thereof) seems to have fixed the problem. Kernel.org bug #36182 notes that power-limit interrupt debugging was enabled by default, which is causing performance degradation in scenarios where CPU voltage regulation is kicking in. A RHN KB article (RHN login required) mentions a problem impacting PE r620 and r720 servers not running the Performance profile, and recommends an update to a kernel released two weeks ago. ...Except we are running the Performance profile... Everything I can find online is running me in circles here. What's the heck is going on?

    Read the article

  • Can someone explain the "use-cases" for the default munin graphs?

    - by exhuma
    When installing munin, it activates a default set of plugins (at least on Ubuntu). Alternatively, you can simply run munin-node-configure to figure out which plugins are supported on your system. Most of these plugins plot straightforward data. My question is not so much about the nature of the data (well... maybe for some of it), but about what it is you look for in these graphs. It is easy to install munin and see fancy graphs, but having the graphs and not being able to "read" them renders them totally useless. I am going to list the standard plugins which are enabled by default on my system, so it's going to be a long list. For completeness, I am also going to list the plugins I think I understand and give a short explanation of what I think each is used for. Please correct me if I am wrong about any of them. So let me split this question into three parts:
    - Plugins where I don't even understand the data
    - Plugins where I understand the data but don't know what I should look out for
    - Plugins which I think I understand
    Plugins where I don't even understand the data
    These may contain questions that are not necessarily aimed at munin alone. Not understanding the data usually means a gap in fundamental knowledge of operating systems/hardware... ;) Feel free to respond with a "giyf" answer. These are plugins where I can only guess what's going on, and I hardly want to be "guessing" at these...
    - Disk IOs per device (IOs/second): What's an IO? I know it stands for input/output, but that's as far as it goes.
    - Disk latency per device (Average IO wait): Not a clue what an "IO wait" is...
    - IO Service Time: This one is a huge mess, and it's near impossible to see anything in the graph at all.
    Plugins where I understand the data but don't know what I should look out for
    - IOStat (blocks/second read/written): I assume the thing to look out for here is spikes, which would mean that the device is in heavy use?
    - Available entropy (bytes): I assume this is important for random number generation? Why would I graph this? So far the value has always been near constant.
    - VMStat (running/I/O sleep processes): What's the difference between this one and the "processes" graph? Both show running/sleeping processes, whereas the "Processes" graph seems to have more details.
    - Disk throughput per device (bytes/second read/written): What's the difference between this one and the "IOStat" graph?
    - inode table usage: What should I look for in this graph?
    Plugins which I think I understand
    I'll be guessing some things here... correct me if I am wrong.
    - Disk usage in percent (percent): How much disk space is used/remaining. As this approaches 100%, you should consider cleaning up or extending the partition. This is extremely important for the root partition.
    - Firewall Throughput (packets/second): The number of packets passing through the firewall. If this spikes for a longer period of time, it could be a sign of a DOS attack (or we are simply receiving a large file). It can also give you an idea of your firewall performance: if it's levelling out and you need more "power", you should consider load balancing; if it's levelling out and you see a correlation with your CPU load, it could also mean that your hardware is not fast enough. Correlations with disk usage could point to excessive LOG targets in your firewall config.
    - eth0 errors (packets in/out): Network errors. If this value is increasing, it could be a sign of faulty hardware.
    - eth0 traffic (bits/second in/out): Raw network traffic. This should correlate with firewall throughput.
    - number of threads: An ever-increasing value might point to a process not properly closing threads. Investigate!
    - processes: Breakdown of active processes (including sleeping). A quick spike here might point to a fork bomb. A slowly but ever-increasing value might point to an application spawning sub-processes but not properly closing them. Investigate using ps faux.
    - process priority: This shows the distribution of process priorities. Having only high-priority processes is not of much use; consider de-prioritizing some.
    - cpu usage: Fairly straightforward. If this is spiking, you may have an attack going on, or a process is hogging the CPU. If it's slowly increasing and approaching the maximum in normal operation, you should consider upgrading your hardware (or load balancing).
    - file table usage: Number of actively open files. If this is reaching the maximum, you may have a process opening, but not properly releasing, files.
    - load average: Shows a summarized value for the system load. Should correlate with CPU usage. Increasing values can come from a number of sources; look for correlations with other graphs.
    - memory usage: A graphical representation of your memory. As long as you have a lot of unused+cache+buffers, you are fine.
    - swap in/out: Shows the activity on your swap partition. This should always be 0. If you see activity here, you should add more memory to your machine!

    Read the article

  • Could not load file or assembly 'System.Data.SQLite'

    - by J. Pablo Fernández
    I've installed ELMAH 1.1 .Net 3.5 x64 in my ASP.NET project and now I'm getting this error (whenever I try to see any page): Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.BadImageFormatException: Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format. More error details at the bottom. My Active Solution platform is "Any CPU" and I'm running on a x64 Windows 7 on an x64, of course, processor. The reason why we are using this version of ELMAH is because 1.0 .Net 3.5 (x86, which is the only platform for which it's compiled) gave us this same error on our x64 Windows server. I've tried compiling for x86 and x64 and I get the same error. I've tried removing the all compiler output (bin and obj). Finally I've made a reference to the SQLite dll directly, something that was not needed for the project to work on the server and I've got this compiler error: Error 1 Warning as Error: Assembly generation -- Referenced assembly 'System.Data.SQLite.dll' targets a different processor MyProject Any ideas what the problem might be? More error details: Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [BadImageFormatException: Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.] System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) +0 System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) +43 System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +127 System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +142 System.Reflection.Assembly.Load(String assemblyString) +28 System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +46 [ConfigurationErrorsException: Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.] 
System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +613 System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory() +203 System.Web.Configuration.CompilationSection.LoadAssembly(AssemblyInfo ai) +105 System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig) +178 System.Web.Compilation.BuildProvidersCompiler..ctor(VirtualPath configPath, Boolean supportLocalization, String outputAssemblyName) +54 System.Web.Compilation.ApplicationBuildProvider.GetGlobalAsaxBuildResult(Boolean isPrecompiledApp) +232 System.Web.Compilation.BuildManager.CompileGlobalAsax() +52 System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled() +337 [HttpException (0x80004005): Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.] System.Web.Compilation.BuildManager.ReportTopLevelCompilationException() +58 System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled() +512 System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters) +729 [HttpException (0x80004005): Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.] System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +8896783 System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +85 System.Web.HttpRuntime.ProcessRequestInternal(HttpWorkerRequest wr) +259
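    The "incorrect format" wording in a BadImageFormatException almost always means a 32-bit/64-bit mismatch rather than a missing file, so it is worth checking exactly which architecture the System.Data.SQLite.dll sitting in the bin folder was built for. A minimal sketch of such a check (a standalone console snippet written for illustration; the bin path is an assumption):

        using System;
        using System.Reflection;

        class AssemblyArchCheck
        {
            static void Main(string[] args)
            {
                // Path to the assembly to inspect; adjust to your project layout.
                string path = args.Length > 0 ? args[0] : @"bin\System.Data.SQLite.dll";

                // GetAssemblyName reads the identity from metadata without executing any code.
                AssemblyName name = AssemblyName.GetAssemblyName(path);

                // MSIL means AnyCPU; X86 or Amd64 means the DLL is tied to one architecture,
                // which is the case for mixed-mode assemblies like System.Data.SQLite.
                Console.WriteLine("{0}: {1}", name.Name, name.ProcessorArchitecture);

                // Bitness of the process running this check.
                Console.WriteLine("This process is {0}-bit", IntPtr.Size * 8);
            }
        }

    If that prints X86 while the site runs in a 64-bit worker process (or Amd64 under a 32-bit one), the mismatch is consistent with the exception above, and the usual fix is to reference the SQLite build that matches the worker process rather than changing the solution platform alone.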

    Read the article

  • Use svcutil to map multiple namespaces when generating WCF service proxies

    - by Pratik
    I want to use svcutil to map multiple wsdl namespace to clr namespace when generating service proxies. I use strong versioning of namespaces and hence the generated clr namespaces are awkward and may mean many client side code changes if the wsdl/xsd namespace version changes. A code example would be better to show what I want. // Service code namespace TestService.StoreService { [DataContract(Namespace = "http://mydomain.com/xsd/Model/Store/2009/07/01")] public class Address { [DataMember(IsRequired = true, Order = 0)] public string street { get; set; } } [ServiceContract(Namespace = "http://mydomain.com/wsdl/StoreService-v1.0")] public interface IStoreService { [OperationContract] List<Customer> GetAllCustomersForStore(int storeId); [OperationContract] Address GetStoreAddress(int storeId); } public class StoreService : IStoreService { public List<Customer> GetAllCustomersForStore(int storeId) { throw new NotImplementedException(); } public Address GetStoreAddress(int storeId) { throw new NotImplementedException(); } } } namespace TestService.CustomerService { [DataContract(Namespace = "http://mydomain.com/xsd/Model/Customer/2009/07/01")] public class Address { [DataMember(IsRequired = true, Order = 0)] public string city { get; set; } } [ServiceContract(Namespace = "http://mydomain.com/wsdl/CustomerService-v1.0")] public interface ICustomerService { [OperationContract] Customer GetCustomer(int customerId); [OperationContract] Address GetStoreAddress(int customerId); } public class CustomerService : ICustomerService { public Customer GetCustomer(int customerId) { throw new NotImplementedException(); } public Address GetStoreAddress(int customerId) { throw new NotImplementedException(); } } } namespace TestService.Shared { [DataContract(Namespace = "http://mydomain.com/xsd/Model/Shared/2009/07/01")] public class Customer { [DataMember(IsRequired = true, Order = 0)] public int CustomerId { get; set; } [DataMember(IsRequired = true, Order = 1)] public string FirstName { get; set; } } } 1. svcutil - without namespace mapping svcutil.exe /t:metadata TestSvcUtil\bin\debug\TestService.CustomerService.dll TestSvcUtil\bin\debug\TestService.StoreService.dll svcutil.exe /t:code *.wsdl *.xsd /o:TestClient\WebServiceProxy.cs The generated proxy looks like namespace mydomain.com.xsd.Model.Shared._2009._07._011 { public partial class Customer{} } namespace mydomain.com.xsd.Model.Customer._2009._07._011 { public partial class Address{} } namespace mydomain.com.xsd.Model.Store._2009._07._011 { public partial class Address{} } The client classes are out of any namespaces. Any change to xsd namespace would imply changing all using statements in my client code all build will break. 2. svcutil - with wildcard namespace mapping svcutil.exe /t:metadata TestSvcUtil\bin\debug\TestService.CustomerService.dll TestSvcUtil\bin\debug\TestService.StoreService.dll svcutil.exe /t:code *.wsdl *.xsd /n:*,MyDomain.ServiceProxy /o:TestClient\WebServicesProxy2.cs The generated proxy looks like namespace MyDomain.ServiceProxy { public partial class Customer{} public partial class Address{} public partial class Address1{} public partial class CustomerServiceClient{} public partial class StoreServiceClient{} } Notice that svcutil has automatically changed one of the Address class to Address1. I don't like this. All client classes are also inside the same namespace. 
    What I want - something like this:
    svcutil.exe /t:code *.wsdl *.xsd /n:"http://mydomain.com/xsd/Model/Shared/2009/07/01, MyDomain.Model.Shared;http://mydomain.com/xsd/Model/Customer/2009/07/01, MyDomain.Model.Customer;http://mydomain.com/wsdl/CustomerService-v1.0, MyDomain.CustomerServiceProxy;http://mydomain.com/xsd/Model/Store/2009/07/01, MyDomain.Model.Store;http://mydomain.com/wsdl/StoreService-v1.0, MyDomain.StoreServiceProxy" /o:TestClient\WebServiceProxy3.cs
    This way I can logically group the CLR namespaces, and any change to a wsdl/xsd namespace is handled in the proxy generation only, without affecting the rest of the client-side code. Right now this is not possible: svcutil allows mapping only one namespace, or all of them, not a list of mappings. I can do one mapping as shown below, but not multiple:
    svcutil.exe /t:code *.wsdl *.xsd /n:"http://mydomain.com/xsd/Model/Store/2009/07/01, MyDomain.Model.Address" /o:TestClient\WebServiceProxy4.cs
    But is there any solution? Svcutil is not magic; it is written in .NET and generates the proxies programmatically. Has anyone written an alternative to svcutil, or can anyone point me in the right direction so that I can write one?
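    Since the question explicitly asks for directions towards writing an alternative: the building blocks svcutil uses (WsdlImporter and ServiceContractGenerator) are public, and ServiceContractGenerator exposes a NamespaceMappings dictionary that accepts as many XML-to-CLR namespace pairs as you need. A rough, untested sketch of that direction (the metadata URL is a placeholder, and generator.Options is left at its defaults):

        using System;
        using System.CodeDom.Compiler;
        using System.IO;
        using System.ServiceModel.Description;

        class ProxyGen
        {
            static void Main()
            {
                // Pull the WSDL + XSDs from the service's metadata endpoint.
                var mexClient = new MetadataExchangeClient(
                    new Uri("http://localhost:8080/StoreService?wsdl"),
                    MetadataExchangeClientMode.HttpGet);
                MetadataSet metadata = mexClient.GetMetadata();

                var importer = new WsdlImporter(metadata);
                var generator = new ServiceContractGenerator();

                // One entry per XML namespace - the multi-mapping that /n: does not allow.
                generator.NamespaceMappings.Add("http://mydomain.com/xsd/Model/Shared/2009/07/01", "MyDomain.Model.Shared");
                generator.NamespaceMappings.Add("http://mydomain.com/xsd/Model/Store/2009/07/01", "MyDomain.Model.Store");
                generator.NamespaceMappings.Add("http://mydomain.com/wsdl/StoreService-v1.0", "MyDomain.StoreServiceProxy");

                foreach (ContractDescription contract in importer.ImportAllContracts())
                    generator.GenerateServiceContractType(contract);

                // Dump the generated types to a C# file.
                using (var writer = new StreamWriter("WebServiceProxy.cs"))
                {
                    CodeDomProvider.CreateProvider("CSharp").GenerateCodeFromCompileUnit(
                        generator.TargetCompileUnit, writer, new CodeGeneratorOptions());
                }
            }
        }

    This only covers code generation from metadata; anything else svcutil does (config output, error reporting) would still need to be added, so treat it as a starting point rather than a drop-in replacement.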

    Read the article

  • NullPointerException in com.sun.tools.jxc.SchemaGenTask

    - by David Collie
    Given this ant script: <?xml version="1.0" encoding="UTF-8"?> <project name="projectname" default="generate-schema" basedir="."> <taskdef name="schemagen" classname="com.sun.tools.jxc.SchemaGenTask"> <classpath> <fileset dir="../BuildJars/lib" includes="*.jar" /> </classpath> </taskdef> <target name="generate-schema"> <schemagen srcdir="src/gb/informaticasystems/messages" destdir="schema"> <schema namespace="http://www.informatica-systems.co.uk/aquarius/messages/1.0" file="messages-1.0.xsd" /> </schemagen> </target> </project> I am getting this error: Buildfile: C:\Users\davidcollie\workspace\aquarius-feature\AquariusServerLibrary\schemagen.xml generate-schema: [schemagen] Generating schema from 4 source files [schemagen] Problem encountered during annotation processing; [schemagen] see stacktrace below for more information. [schemagen] java.lang.NullPointerException [schemagen] at com.sun.tools.jxc.model.nav.APTNavigator$2.onDeclaredType(APTNavigator.java:428) [schemagen] at com.sun.tools.jxc.model.nav.APTNavigator$2.onClassType(APTNavigator.java:402) [schemagen] at com.sun.tools.jxc.model.nav.APTNavigator$2.onClassType(APTNavigator.java:456) [schemagen] at com.sun.istack.tools.APTTypeVisitor.apply(APTTypeVisitor.java:27) [schemagen] at com.sun.tools.jxc.model.nav.APTNavigator.getBaseClass(APTNavigator.java:109) [schemagen] at com.sun.tools.jxc.model.nav.APTNavigator.getBaseClass(APTNavigator.java:85) [schemagen] at com.sun.xml.bind.v2.model.impl.PropertyInfoImpl.getIndividualType(PropertyInfoImpl.java:190) [schemagen] at com.sun.xml.bind.v2.model.impl.PropertyInfoImpl.<init>(PropertyInfoImpl.java:132) [schemagen] at com.sun.xml.bind.v2.model.impl.MapPropertyInfoImpl.<init>(MapPropertyInfoImpl.java:67) [schemagen] at com.sun.xml.bind.v2.model.impl.ClassInfoImpl.createMapProperty(ClassInfoImpl.java:917) [schemagen] at com.sun.xml.bind.v2.model.impl.ClassInfoImpl.addProperty(ClassInfoImpl.java:874) [schemagen] at com.sun.xml.bind.v2.model.impl.ClassInfoImpl.findGetterSetterProperties(ClassInfoImpl.java:993) [schemagen] at com.sun.xml.bind.v2.model.impl.ClassInfoImpl.getProperties(ClassInfoImpl.java:303) [schemagen] at com.sun.xml.bind.v2.model.impl.ModelBuilder.getClassInfo(ModelBuilder.java:243) [schemagen] at com.sun.xml.bind.v2.model.impl.ModelBuilder.getClassInfo(ModelBuilder.java:209) [schemagen] at com.sun.xml.bind.v2.model.impl.ModelBuilder.getTypeInfo(ModelBuilder.java:315) [schemagen] at com.sun.xml.bind.v2.model.impl.ModelBuilder.getTypeInfo(ModelBuilder.java:330) [schemagen] at com.sun.tools.xjc.api.impl.j2s.JavaCompilerImpl.bind(JavaCompilerImpl.java:90) [schemagen] at com.sun.tools.jxc.apt.SchemaGenerator$1.process(SchemaGenerator.java:115) [schemagen] at com.sun.mirror.apt.AnnotationProcessors$CompositeAnnotationProcessor.process(AnnotationProcessors.java:60) [schemagen] at com.sun.tools.apt.comp.Apt.main(Apt.java:454) [schemagen] at com.sun.tools.apt.main.JavaCompiler.compile(JavaCompiler.java:258) [schemagen] at com.sun.tools.apt.main.Main.compile(Main.java:1102) [schemagen] at com.sun.tools.apt.main.Main.compile(Main.java:964) [schemagen] at com.sun.tools.apt.Main.processing(Main.java:95) [schemagen] at com.sun.tools.apt.Main.process(Main.java:85) [schemagen] at com.sun.tools.apt.Main.process(Main.java:67) [schemagen] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [schemagen] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [schemagen] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 
[schemagen] at java.lang.reflect.Method.invoke(Method.java:597) [schemagen] at com.sun.tools.jxc.AptBasedTask$InternalAptAdapter.execute(AptBasedTask.java:97) [schemagen] at com.sun.tools.jxc.AptBasedTask.compile(AptBasedTask.java:144) [schemagen] at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:820) [schemagen] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288) [schemagen] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [schemagen] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [schemagen] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [schemagen] at java.lang.reflect.Method.invoke(Method.java:597) [schemagen] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:105) [schemagen] at org.apache.tools.ant.Task.perform(Task.java:348) [schemagen] at org.apache.tools.ant.Target.execute(Target.java:357) [schemagen] at org.apache.tools.ant.Target.performTasks(Target.java:385) [schemagen] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1329) [schemagen] at org.apache.tools.ant.Project.executeTarget(Project.java:1298) [schemagen] at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) [schemagen] at org.eclipse.ant.internal.ui.antsupport.EclipseDefaultExecutor.executeTargets(EclipseDefaultExecutor.java:32) [schemagen] at org.apache.tools.ant.Project.executeTargets(Project.java:1181) [schemagen] at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.run(InternalAntRunner.java:423) [schemagen] at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.main(InternalAntRunner.java:137) BUILD FAILED C:\Users\davidcollie\workspace\aquarius-feature\AquariusServerLibrary\schemagen.xml:11: schema generation failed Total time: 1 second Any ideas?

    Read the article

  • Rendering a randomly generated maze in WinForms.NET

    - by Claus Jørgensen
    Hi I'm trying to create a maze-generator, and for this I have implemented the Randomized Prim's Algorithm in C#. However, the result of the generation is invalid. I can't figure out if it's my rendering, or the implementation that's invalid. So for starters, I'd like to have someone take a look at the implementation: maze is a matrix of cells. var cell = maze[0, 0]; cell.Connected = true; var walls = new HashSet<MazeWall>(cell.Walls); while (walls.Count > 0) { var randomWall = walls.GetRandom(); var randomCell = randomWall.A.Connected ? randomWall.B : randomWall.A; if (!randomCell.Connected) { randomWall.IsPassage = true; randomCell.Connected = true; foreach (var wall in randomCell.Walls) walls.Add(wall); } walls.Remove(randomWall); } Here's a example on the rendered result: Edit Ok, lets have a look at the rendering part then: private void MazePanel_Paint(object sender, PaintEventArgs e) { int size = 20; int cellSize = 10; MazeCell[,] maze = RandomizedPrimsGenerator.Generate(size); mazePanel.Size = new Size( size * cellSize + 1, size * cellSize + 1 ); e.Graphics.DrawRectangle(Pens.Blue, 0, 0, size * cellSize, size * cellSize ); for (int y = 0; y < size; y++) for (int x = 0; x < size; x++) { foreach(var wall in maze[x, y].Walls.Where(w => !w.IsPassage)) { if (wall.Direction == MazeWallOrientation.Horisontal) { e.Graphics.DrawLine(Pens.Blue, x * cellSize, y * cellSize, x * cellSize + cellSize, y * cellSize ); } else { e.Graphics.DrawLine(Pens.Blue, x * cellSize, y * cellSize, x * cellSize, y * cellSize + cellSize ); } } } } And I guess, to understand this we need to see the MazeCell and MazeWall class: namespace MazeGenerator.Maze { class MazeCell { public int Column { get; set; } public int Row { get; set; } public bool Connected { get; set; } private List<MazeWall> walls = new List<MazeWall>(); public List<MazeWall> Walls { get { return walls; } set { walls = value; } } public MazeCell() { this.Connected = false; } public void AddWall(MazeCell b) { walls.Add(new MazeWall(this, b)); } } enum MazeWallOrientation { Horisontal, Vertical, Undefined } class MazeWall : IEquatable<MazeWall> { public IEnumerable<MazeCell> Cells { get { yield return CellA; yield return CellB; } } public MazeCell CellA { get; set; } public MazeCell CellB { get; set; } public bool IsPassage { get; set; } public MazeWallOrientation Direction { get { if (CellA.Column == CellB.Column) { return MazeWallOrientation.Horisontal; } else if (CellA.Row == CellB.Row) { return MazeWallOrientation.Vertical; } else { return MazeWallOrientation.Undefined; } } } public MazeWall(MazeCell a, MazeCell b) { this.CellA = a; this.CellB = b; a.Walls.Add(this); b.Walls.Add(this); IsPassage = false; } #region IEquatable<MazeWall> Members public bool Equals(MazeWall other) { return (this.CellA == other.CellA) && (this.CellB == other.CellB); } #endregion } }
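    One aside that may help whoever tries to reproduce this: walls.GetRandom() is not a method on HashSet<T>, so it is presumably an extension method elsewhere in the project. A sketch of what it is assumed to look like (my guess at its shape, so the snippet above compiles on its own):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class CollectionExtensions
        {
            private static readonly Random Rng = new Random();

            // Returns a uniformly random element; assumes the collection is non-empty.
            public static T GetRandom<T>(this ICollection<T> items)
            {
                return items.ElementAt(Rng.Next(items.Count));
            }
        }

    If the real GetRandom is biased (for example, if it creates a new Random on every call and therefore repeats values), the wall selection in Prim's algorithm would be skewed before rendering even starts, so it is worth ruling that out separately from the drawing code.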

    Read the article

  • Receiving POST data in ASP.NET

    - by grast
    Hi, I want to use ASP for code generation in a C# desktop application. To achieve this, I set up a simple host (derived from System.MarshalByRefObject) that processes a System.Web.Hosting.SimpleWorkerRequest via HttpRuntime.ProcessRequest. This processes the ASPX script specified by the incoming request (using System.Net.HttpListener to wait for requests). The client-part is represented by a System.ComponentModel.BackgroundWorker that builds the System.Net.HttpWebRequest and receives the response from the server. A simplified version of my client-part-code looks like this: private void SendRequest(object sender, DoWorkEventArgs e) { // create request with GET parameter var uri = "http://localhost:9876/test.aspx?getTest=321"; var request = (HttpWebRequest)WebRequest.Create(uri); // append POST parameter request.Method = "POST"; request.ContentType = "application/x-www-form-urlencoded"; var postData = Encoding.Default.GetBytes("postTest=654"); var postDataStream = request.GetRequestStream(); postDataStream.Write(postData, 0, postData.Length); // send request, wait for response and store/print content using (var response = (HttpWebResponse)request.GetResponse()) { using (var reader = new StreamReader(response.GetResponseStream(), Encoding.UTF8)) { _processsedContent = reader.ReadToEnd(); Debug.Print(_processsedContent); } } } My server-part-code looks like this (without exception-handling etc.): public void ProcessRequests() { // HttpListener at http://localhost:9876/ var listener = SetupListener(); // SimpleHost created by ApplicationHost.CreateApplicationHost var host = SetupHost(); while (_running) { var context = listener.GetContext(); using (var writer = new StreamWriter(context.Response.OutputStream)) { // process ASP script and send response back to client host.ProcessRequest(GetPage(context), GetQuery(context), writer); } context.Response.Close(); } } So far all this works fine as long as I just use GET parameters. But when it comes to receiving POST data in my ASPX script I run into trouble. For testing I use the following script: // GET parameters are working: var getTest = Request.QueryString["getTest"]; Response.Write("getTest: " + getTest); // prints "getTest: 321" // don't know how to access POST parameters: var postTest1 = Request.Form["postTest"]; // Request.Form is empty?! Response.Write("postTest1: " + postTest1); // so this prints "postTest1: " var postTest2 = Request.Params["postTest"]; // Request.Params is empty?! Response.Write("postTest2: " + postTest2); // so this prints "postTest2: " It seems that the System.Web.HttpRequest object I'm dealing with in ASP does not contain any information about my POST parameter "postTest". I inspected it in debug mode and none of the members did contain neither the parameter-name "postTest" nor the parameter-value "654". I also tried the BinaryRead method of Request, but unfortunately it is empty. This corresponds to Request.InputStream==null and Request.ContentLength==0. And to make things really confusing the Request.HttpMethod member is set to "GET"?! To isolate the problem I tested the code by using a PHP script instead of the ASPX script. This is very simple: print_r($_GET); // prints all GET variables print_r($_POST); // prints all POST variables And the result is: Array ( [getTest] = 321 ) Array ( [postTest] = 654 ) So with the PHP script it works, I can access the POST data. Why does the ASPX script don't? What am I doing wrong? Is there a special accessor or method in the Response object? 
Can anyone give a hint, or better yet, explain how to solve this? Thanks in advance.
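    For what it's worth, one direction to rule out (an assumption, since the code that builds the SimpleWorkerRequest inside the host isn't shown): SimpleWorkerRequest never supplies an entity body or a POST verb to the runtime, so the hosted HttpRequest can end up looking like a body-less GET no matter what the HttpListener received. A sketch of a derived worker request that forwards the posted bytes (class and parameter names are mine, untested):

        using System.IO;
        using System.Web;
        using System.Web.Hosting;

        // Wraps SimpleWorkerRequest and feeds the runtime a preloaded entity body,
        // plus the matching verb and headers, so Request.Form can be populated.
        public class PostAwareWorkerRequest : SimpleWorkerRequest
        {
            private readonly byte[] _body;

            public PostAwareWorkerRequest(string page, string query, TextWriter output, byte[] body)
                : base(page, query, output)
            {
                _body = body ?? new byte[0];
            }

            public override string GetHttpVerbName()
            {
                return _body.Length > 0 ? "POST" : base.GetHttpVerbName();
            }

            public override string GetKnownRequestHeader(int index)
            {
                if (_body.Length > 0 && index == HeaderContentLength)
                    return _body.Length.ToString();
                if (_body.Length > 0 && index == HeaderContentType)
                    return "application/x-www-form-urlencoded";
                return base.GetKnownRequestHeader(index);
            }

            public override bool IsEntireEntityBodyIsPreloaded()
            {
                return _body.Length > 0;
            }

            public override byte[] GetPreloadedEntityBody()
            {
                return _body.Length > 0 ? _body : base.GetPreloadedEntityBody();
            }
        }

    The server loop would then read context.Request.InputStream into a byte array and construct this instead of a plain SimpleWorkerRequest before handing it to HttpRuntime.ProcessRequest - again, a sketch of the direction rather than a verified fix.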

    Read the article

  • Search like Google

    - by Rajanikant
    I have a task to make a search module in which i have database users and tablename userProfile and i want to search profile when i entered text in text box for ex. if i entered "I am looking for MBA in delhi" or 'mba information in delhi' it will displayed all user registered expertise as mba and city in delhi . this will be like job portal or any social networking portal my database is -- phpMyAdmin SQL Dump -- version 2.8.1 -- http://www.phpmyadmin.net -- Host: localhost -- Generation Time: May 01, 2010 at 10:58 AM -- Server version: 5.0.21 -- PHP Version: 5.1.4 -- Database: users -- -- Table structure for table userProfile CREATE TABLE userprofile ( id int(11) NOT NULL auto_increment, name varchar(50) collate latin1_general_ci NOT NULL, expertise varchar(50) collate latin1_general_ci NOT NULL, city varchar(50) collate latin1_general_ci NOT NULL, state varchar(50) collate latin1_general_ci NOT NULL, discription varchar(500) collate latin1_general_ci NOT NULL, PRIMARY KEY (id) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci AUTO_INCREMENT=3 ; -- -- Dumping data for table userProfile INSERT INTO userProfile VALUES (1, 'a', 'MBA HR', 'Delhi', 'Delhi', 'Fortune is top management college in Delhi, Best B-schools in India providing business studies and management training. FIIB is Delhi based most ranked ...'); INSERT INTO userProfile VALUES (2, 'b', 'MBA marketing', 'Delhi', 'Delhi', 'Fortune is top management college in Delhi, Best B-schools in India providing business studies and management training. FIIB is Delhi based most ranked ...'); and search.php page <?php include("config.php"); include("class.search.php"); $br=new search(); if($_POST['searchbutton']) { $str=$_POST['textfield']; $brstr=$br->breakkey($str); } ?> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Untitled Document</title> </head> <body> <table width="100%" border="0"> <form name="frmsearch" method="post"> <tr> <td width="367">&nbsp;</td> <td width="300"><label> <input name="textfield" type="text" id="textfield" size="50" /> </label></td> <td width="294"><label> <input type="submit" name="searchbutton" id="button" value="Search" /> </label></td> </tr></form> <tr> <td>&nbsp;</td> <td>&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td>&nbsp;</td> <td>&nbsp;</td> <td>&nbsp;</td> </tr> </table> </body> </html> and config.php is <?php error_reporting(E_ALL); $host="localhost"; $username="root"; $password=""; $dbname="users"; $con=mysql_connect($host,$username,$password) or die("could not connect database"); $db=mysql_select_db($dbname,$con) or die("could not select database"); ?> and class.search.php is <?php class search { function breakkey($key) { global $db; $words=explode(' ',$key); return $words; } function searchitem($perm) { global $db; foreach($perm as $k=>$v) { $sql="select * from users" } } } ?>

    Read the article

  • XSL-FO: Force Wrap on Table Entries

    - by Ace
    I'm having an issue where when I publish my modspecs to pdf (XSL-FO). My tables are having issues, where the content of a cell will overflow its column into the next one. How do I force a break on the text so that a new line is created instead? I can't manually insert zero-space characters since the table entries are programmatically entered. I'm looking for a simple solution that I can just simply add to docbook_pdf.xsl (either as a xsl:param or xsl:attribute) EDIT: Here is where I'm at currently: <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0" xmlns:fo="http://www.w3.org/1999/XSL/Format"> <xsl:import href="urn:docbkx:stylesheet"/> ...(the beginning of my stylesheet for pdf generation, e.g. header and footer content stuff) <xsl:template match="text()"> <xsl:call-template name="intersperse-with-zero-spaces"> <xsl:with-param name="str" select="."/> </xsl:call-template> </xsl:template> <xsl:template name="intersperse-with-zero-spaces"> <xsl:param name="str"/> <xsl:variable name="spacechars"> &#x9;&#xA; &#x2000;&#x2001;&#x2002;&#x2003;&#x2004;&#x2005; &#x2006;&#x2007;&#x2008;&#x2009;&#x200A;&#x200B; </xsl:variable> <xsl:if test="string-length($str) &gt; 0"> <xsl:variable name="c1" select="substring($str, 1, 1)"/> <xsl:variable name="c2" select="substring($str, 2, 1)"/> <xsl:value-of select="$c1"/> <xsl:if test="$c2 != '' and not(contains($spacechars, $c1) or contains($spacechars, $c2))"> <xsl:text>&#x200B;</xsl:text> </xsl:if> <xsl:call-template name="intersperse-with-zero-spaces"> <xsl:with-param name="str" select="substring($str, 2)"/> </xsl:call-template> </xsl:if> </xsl:template> </xsl:stylesheet> With this, the long words are successfully broken up in the table cells! Unfortunately, the side effect is that normal text elsewhere (like in a under sextion X) now breaks up words so that they appear on seperate lines. Is there a way to isolate the above process to just tables? EDIT #2 here is what the fo spits out for a single table... <fo:table-row><fo:table-cell padding-start="2pt" padding-end="2pt" padding-top="2pt" ... </fo:block></fo:table-cell></fo:table-row>

    Read the article

  • How to load and pass an XForms form in Orbeon (how to send an instance to XForms)?

    - by Clem
    Hi, I am using the Orbeon Forms solution to generate messages from filled-in web forms. I read different code snippetse in Orbeon's wiki on XForms submission from a pipeline, and I tried different solutions but it doesn't work, and there is no example with a POST from a pipeline, caught by a PFC and sent to an XForms view that receives the posted data (all examples are done in the same page). I have the following pipeline which is received on his instance input: pipelineWrite.xpl <p:config ...> <p:param name="instance" type="input"/> <!-- instance containing the data of the form filled by user --> <p:param name="data" type="output"/> <p:processor name="oxf:java"> <!-- transforms the data into a file --> <p:input name="config"> <config sourcepath="." class="ProcessorWriteCUSDECCD001B"/> </p:input> <p:input name="input" href="#instance"/> <p:output name="output" id="file"/> <!-- XML containing the url of the file --> </p:processor> <p:processor name="oxf:xforms-submission"> <!-- post the XML to the success view --> <p:input name="submission"> <xforms:submission method="post" action="/CUSDECCD001B/success" /> </p:input> <p:input name="request" href="#file"/> <p:output name="response" ref="data"/> </p:processor> </p:config> Then there is the PFC which catch the actions : page-flow.xml <config xmlns="http://www.orbeon.com/oxf/controller"> <page path-info="/CUSDECCD001B/" view="View/ViewForm.xhtml"/> <!-- load the form to be filled in by user --> <page path-info="/CUSDECCD001B/write" model="Controller/PipelineWrite.xpl"/> <!-- send the instance of the form filled to the pipeline above --> <page path-info="/CUSDECCD001B/success" view="View/ViewSuccess.xhtml"/> <!-- send the instance containing the url of the file to the success view --> <epilogue url="oxf:/config/epilogue.xpl"/> </config> Then there is the success view, which is very simple : ViewSuccess.xhtml <html ... > <head> <title>Generation OK</title> <xforms:model> <xforms:instance id="FILE" src="input:instance"> <files xmlns=""> <file mediaType="" filename="" size="" /> </files> </xforms:instance> </xforms:model> </head> <body> Click here to download : <xforms:output ref="//file" appearance="xxforms:download"> <xforms:filename ref="@filename"/> <xforms:mediatype ref="@mediatype"/> <xforms:label>Download</xforms:label> </xforms:output> </body> </html> The problem is that the post is done well, the PFC catches the action well, load the correct view, but the view is loaded with no data (the view doesn't find the data on his instance input). I tried with a GET in the view to retrieve the POST data, and that's the same thing. No data is retrieved. So the download button doesn't work. I hope I'm clear enough to find a solution. Thanks in advance.

    Read the article
