Search Results

Search found 11629 results on 466 pages for 'cloud solutions'.

Page 206/466

  • SSD Tweaks for Ubuntu 12.04

    - by Mustafa Erdinç
    I need to tweak the SSD in my Dell XPS 13z for maximum performance and lifespan. I read the solutions explained here, but they are for 11.10, and my fstab is different. For now my fstab looks like this:

        proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sda1 during installation
        UUID=abf5ce9e-bdb7-4b2f-a7bd-bbd9efa72a98 / ext4 errors=remount-ro 0 1
        # /home was on /dev/sda2 during installation
        UUID=491427b2-7482-4483-b6eb-7c564b991aff /home ext4 defaults 0 2
        # swap was on /dev/sda3 during installation
        #UUID=7551000d-e708-4e0f-9fd2-9f93119f63fb none swap sw 0 0
        /dev/mapper/cryptswap1 none swap sw 0 0
        tmpfs /tmp tmpfs mode=1777

    And my rc.local looks like this:

        echo noop > /sys/block/sda/queue/scheduler
        echo deadline > /sys/block/sda/queue/scheduler
        echo 1 > /sys/block/sda/queue/iosched/fifo_batch
        exit 0

    Do you have any suggestions on what I should do? Regards
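    (A hedged aside: the usual 12.04 suggestions are noatime, which cuts metadata write traffic, and discard, which enables online TRIM on ext4. Treat the lines below as a sketch to adapt, not a tested drop-in. Note also that the two scheduler echos in the rc.local above overwrite each other, so a single line is enough.)

        # possible SSD options for the ext4 entries (UUIDs are the asker's own)
        UUID=abf5ce9e-bdb7-4b2f-a7bd-bbd9efa72a98 /     ext4 noatime,discard,errors=remount-ro 0 1
        UUID=491427b2-7482-4483-b6eb-7c564b991aff /home ext4 noatime,discard,defaults          0 2

        # rc.local: keep one scheduler choice; deadline is the common pick for SSDs
        echo deadline > /sys/block/sda/queue/scheduler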

    Read the article

  • Magento Bulk Product Import + Modules Nightmare

    - by mike
    Have 5,000 products in a CSV file. The file has been re-saved as UTF-8 in Google Documents and exported to CSV from Excel. It loads perfectly, with all fields, in the demo of Magento Store Manager (except we don't want to buy it :) When trying to upload in regular Magento, we keep getting error messages about column duplicates... yes, we have hundreds of duplicates, as the titles of products correspond with different sizes, etc... no way around it. Any solutions around this, or any open source software similar to Store Manager that can do the trick? Ready to give up and go to a paid solution such as BigCommerce. Also, I uploaded a bunch of free modules/keys for open source bulk product import from magentocommerce, but I can't find them anywhere in the main admin panel to use... there is no menu item for them anywhere.

    Read the article

  • TELERIK LAUNCHES NEW AUTOMATED TESTING TOOLS PRODUCT LINE

    TELERIK LAUNCHES NEW AUTOMATED TESTING TOOLS PRODUCT LINE Merger with ArtOfTest repositions Telerik as a major player in the automated testing market. Waltham, MA, April 13, 2010: Telerik, a leading provider of development tools and solutions for the Microsoft .NET platform, today announced the launch of WebUI Test Studio 2010, an innovative and easy-to-use automated web-testing solution. Encompassing essential web technologies such as ASP.NET AJAX, Silverlight, and MVC, Telerik's WebUI Test Studio...

    Read the article

  • Incremental file system backups

    - by brunopereira81
    I use VirtualBox a lot for distro and application testing purposes. One of the features I simply love about it is virtual machine snapshots: it saves the state of a virtual machine and can restore it to its former glory if something you did went wrong, without any problems and without consuming all your hard disk space. On my live systems I know how to create a 1:1 image of the file system, but all the solutions I know of will create a new image of the complete file system. Are there any programs or file systems that are capable of taking a snapshot of the current file system and saving it to another location, but that create incremental backups instead of a complete new image each time? To describe what I want simply: it should be like dd images of a file system, but instead of only full backups it would also create incremental ones. I am not looking for Clonezilla, etc. It should run within the system itself with no (or almost no) intervention from the user, but contain all the data of the file systems.
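    (For illustration, a minimal sketch of one way to get dd-like full trees with incremental space usage, using rsync hard links; all paths here are assumptions:)

        #!/bin/bash
        # Each snapshot looks like a full copy, but files unchanged since the
        # last run are hard links into it, so only changes consume new space.
        SRC=/
        DEST=/mnt/backup                  # assumed location on another disk
        NEW="$DEST/snap-$(date +%Y%m%d-%H%M%S)"
        LAST=$(ls -1d "$DEST"/snap-* 2>/dev/null | tail -n 1)
        rsync -aHAX --delete ${LAST:+--link-dest="$LAST"} \
              --exclude=/proc --exclude=/sys --exclude=/dev \
              --exclude=/run --exclude=/tmp --exclude="$DEST" \
              "$SRC" "$NEW"

    LVM and btrfs snapshots are the usual block-level alternatives when file-level copies are not enough.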

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to: · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data) · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog) The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include: · Global Transaction Identifiers coupled with MySQL utilities for automatic failover/switchover and slave promotion · Crash Safe Slaves and Binlog · Optimized Row Based Replication · Replication Event Checksums · Time Delayed Replication These and many more are discussed in the "MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services" Developer Zone article. Back to the benchmark - details are as follows. Environment The test environment consisted of two Linux servers: one running the replication master, one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration: - Hardware: Oracle Sun Fire X4170 M2 Server - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz - OS: 64-bit Oracle Enterprise Linux 6.1 - Memory: 48 GB Test Procedure Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table: CREATE TABLE `sbtest` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `k` int(10) unsigned NOT NULL DEFAULT '0', `c` char(120) NOT NULL DEFAULT '', `pad` char(60) NOT NULL DEFAULT '', PRIMARY KEY (`id`), KEY `k` (`k`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 10,000 rows were inserted into each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master.
Each sysbench client executed 10,000 "update key" statements: UPDATE sbtest SET k=k+1 WHERE id = <random row> In total, this generated 100,000 update statements to later replicate during the test itself. Test Methodology: The number of slave workers to test with was configured using: SET GLOBAL slave_parallel_workers=<workers> Then the slave IO thread was started, and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, using "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS). Test Reset: The test-reset cycle was implemented as follows: · the slave was stopped · the slave data directory was replaced with the previous backup · the slave was restarted with the slave threads' replication pointer repositioned to the point before the update queries in the binlog. The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The test-reset cycle was repeated 3 times for each worker count from 0 to 24, and the QPS metric was calculated and averaged for each worker count. MySQL Configuration The relevant configuration settings used for MySQL are as follows: binlog-format=STATEMENT relay-log-info-repository=TABLE master-info-repository=TABLE As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is: 0 worker threads: - current (i.e. single-threaded) sequential mode - 1 x IO thread and 1 x SQL thread - SQL thread both reads and executes the events 1 worker thread: - sequential mode - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread - coordinator reads the event and hands it to the worker, who executes it 2+ worker threads: - parallel execution - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads - coordinator reads events and hands them to the workers, who execute them Results Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 schemas. This result is compared to the current replication implementation, which is based on a single SQL thread only (i.e. zero worker threads). Figure 1: 5x Higher Performance with Multi-Threaded Slaves The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post. Figure 2: Detailed Results As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads, however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results: · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution. · As expected, having more workers than schemas adds no visible benefit.
Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance: relay-log-info-repository=TABLE master-info-repository=TABLE For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE. Conclusion As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This and the other replication enhancements introduced in MySQL 5.6 are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don't hesitate to comment on this or other replication blogs with feedback and questions. Appendix – Detailed Results
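    (For reference, a sketch of one iteration of the procedure described above, run on the slave; credentials are omitted, and the binlog file name and end position are hypothetical placeholders:)

        # set the worker count for this run (the slave must be stopped first)
        mysql -e "STOP SLAVE; SET GLOBAL slave_parallel_workers=10;"
        # pull the 100k updates into the relay log before timing anything
        mysql -e "START SLAVE IO_THREAD;"
        # ...wait for the relay log to fill, then time the apply phase
        START=$(date +%s.%N)
        mysql -e "START SLAVE SQL_THREAD; SELECT MASTER_POS_WAIT('mysql-bin.000123', 456789);"
        STOP=$(date +%s.%N)
        echo "QPS: $(echo "100000 / ($STOP - $START)" | bc -l)"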

    Read the article

  • Dual Inspection / Four Eyes Principle

    - by Ralf
    I have the requirement to implement some kind of dual inspection or four-eyes principle as a feature of my software, meaning that every change to an object made by user A has to be checked by user B. A trivial example would be a publishing system where an author writes an article and another person has to proofread it before it is published. I am a little bit surprised that you find nearly nothing about it on the net. No patterns, no libraries (besides cibet), no workflow solutions, etc. Is this requirement really so uncommon? Or am I searching for the wrong terms? I am not looking for a specific solution; more for a pattern or best-practice approach. Update: the above example is really trivial. Let's add some more complexity to it. The article has been published, but it now needs an update. Taking the article offline for the update is not an option, but the update has to be proofread, too.

    Read the article

  • First Foray – About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site and high time that something was posted. I have a list of topics that I will be working through and posting. Some, I am sure, will have been posted by others, but I will be sticking to the technical problems and challenges that I've recently faced, and the solutions that worked for me. My motto when learning something new has always been "My kingdom for an example!", and I plan on delivering useful examples here so others can learn from my efforts, failures and successes. A bit of background about me... My name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and focus on SQL Server Data Warehouse design, Analysis Services and Enterprise Reporting solutions. I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges. Enough about me... On to a real problem... SSIS Connection Timeouts versus Command Timeouts Last week, I was working on automating the processing for a large Analysis Services cube. I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new and dropping old partitions to/from an Analysis Services cube. I had the package working great, tested, and ready for deployment. It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if there are any partitions present that are no longer needed within a rolling 60-month window. My client uses Tivoli for running all their production jobs, not SQL Agent, so I had to build a command-line file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation, using an XML configuration file to pass server-specific parameters into the package when executed with the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job. On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again failure. After a number of further failed attempts, and after getting the Teradata DBA involved to monitor and see if we were connecting and failing or just failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds. This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then. At this point one of the DBAs found a post on the Teradata forum that had the clues to the puzzle: there is a separate "CommandTimeout" custom property on the data source object that may need to be adjusted for longer-running queries. I opened up the SSIS package, opened the data flow task that generated the partition list table and right-clicked on the data source. From the context menu, I selected "Show Advanced Editor" and found the property. Sure enough, it was set to 30 seconds.
The CommandTimeout property can also be edited in the SSIS Properties sheet. In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds. I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn't. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute. Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server. With this change in place, the job finally completed successfully. The lesson learned for me was two-fold: Always compare query execution times between development and production environments, and don't assume that production will always be faster. With higher user demands, query governors, and a whole lot more data, the execution time of even what might seem to be simple queries can vary greatly. SSIS connection timeout settings do not affect command timeouts. Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding. Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was. To be fair, though, in the 5+ years that I have been working with SSIS, I could only recall one other time where I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.

    Read the article

  • No Cost 1-Click Remarketer Level Training

    - by martin.morganti(at)oracle.com
    The Remarketer level has proven to be a great success as a way of enabling Remarketers to jump-start a resale business with Oracle. As part of the Knowledge Zone for the 1-Click products, we have some no-cost training available - the Oracle 1-Click Technology Products Guided Learning Path - which explains the program and how to position Oracle products. We have been working to increase the training available for Remarketers, and I am pleased to let you know that we have recently added more no-cost training. The training path that we have released is the Oracle Database 11g 1-Click Technology Sales Guided Learning Path. This set of courses provides more detail on Oracle Database 11g and will help you to better uncover and exploit opportunities to sell it as part of your solutions. So if you are interested in a No Fees, No Barriers, No Excuses way to resell Oracle 1-Click products, look at the Remarketer page and take the free 1-Click Guided Learning Paths in the Training section to kick-start your activity.

    Read the article

  • Oracle CRM On Demand Release 24 is Generally Available

    - by Richard Lefebvre
    We are pleased to announce that Oracle CRM On Demand Release 24 is Generally Available as of October 25, 2013. Get smarter, more productive, and the best value with Oracle CRM On Demand Release 24. Oracle CRM On Demand continues to be the most complete Software-as-a-Service (SaaS) CRM solution available. Now, with Release 24, organizations of all types and sizes benefit from actionable insight anywhere, anytime, as well as key enhancements in mobility, embedded social, analytics, integration and extensibility, and ease of use. Next Generation Mobile and Desktop Solutions: Oracle CRM On Demand Release 24 offers a complete set of mobile and desktop solutions that improve productivity by enabling reps to access and update information anywhere, anytime. Capabilities include: Oracle CRM On Demand Disconnected Mobile Sales (DMS) - a disconnected native iPad solution, DMS further streamlines the mobile sales process by adding Structured Product Messaging to record brand-specific call objectives, enhancements in HTML5 eDetailing, including message response tracking, and improvements in administration and configuration, such as more field management options for read-only fields, role management and enhanced logging. Oracle CRM On Demand Connected Mobile Sales: this add-on mobile service provides a configurable mobile solution on iOS, BlackBerry and now Android devices. You can access data from CRM On Demand in real time with a rich, native user experience that is comfortable and familiar to current iOS, BlackBerry and Android users. New features also include Single Sign-On to enhance security for mobile users. Oracle CRM On Demand Desktop: this application centralizes essential CRM information in the familiar Microsoft Outlook environment, increasing user adoption and decreasing training costs. Users can manage CRM data while disconnected, then synchronize bi-directionally when they are back on the network. New in Oracle CRM On Demand Desktop Version 3 is the ability to synchronize by Books of Business, and improved Online Lookup. Mobile Browser Support: the following mobile device browsers are now supported: Apple iPhone, Apple iPad, Windows 8 tablets, and Google Android. Leverage the Social Enterprise Engaging customers via social channels is rapidly becoming a significant key to enhanced customer experience, as it provides proactive customer service, targeted messaging and greater intimacy throughout the entire customer lifecycle. Listening to customers on social channels can identify a customer's sphere of influence and the real value they bring to their organization, or the impact they can have on an opportunity. Servicing the customer's need is the first step towards loyalty to a brand; integrating with social channels allows us to maximize brand affinity and virally expand customer engagements, thus increasing revenue. Oracle CRM On Demand is leveraging the Social Enterprise through its integration with Oracle's Social Relationship Management (SRM) product suite by providing out-of-the-box integration with Social Engagement and Monitoring (SEM), Social Marketing (SM) and Oracle Social Network (OSN). With Oracle CRM On Demand Release 24, users are able to create a service request from a social post via SEM and have leads entered on an SM lead form automatically entered into Oracle CRM On Demand along with the campaign, streamlining the lead qualification process.
Get Smarter with Actionable Insight The difference between making good decisions and great decisions depends heavily upon the quality, structure, and availability of the information at hand. Oracle CRM On Demand Release 24 expands upon its industry-leading analytics capabilities to provide greater business insight than ever before. New capabilities include flexible permissions on analytics report folders, allowing for read-only access to reports, and additional field and object coverage. Get More Productive with Powerful Tools Oracle CRM On Demand Release 24 introduces a new set of powerful capabilities designed to maximize productivity. A significant new feature for customizing Oracle CRM On Demand is a JavaScript API. The JS API allows customers to add new buttons, suppress existing buttons and even change what happens when a user clicks an existing button. Other usability enhancements, such as personalized related-information applets and extended case-insensitive search, provide users with a better, more intuitive experience. Additional privileges for viewing private activities and notes allow administrators to reassign records as needed, along with Custom Object management. Workflow has been added to the Order Item object, and tasks can now be assigned to a relative user, such as an Account Owner, allowing more complex business processes to be automated and adhered to. Get the Best Value Oracle CRM On Demand delivers unprecedented value with the broadest set of capabilities from a single-provider solution, the industry's lowest total cost of ownership, the most on-demand deployment options, the deepest CRM expertise and experience of any CRM provider, and the most secure CRM in the cloud. With Release 24, Oracle CRM On Demand now includes even more enterprise-grade security, integration, and extensibility features, along with enhanced industry editions, to save you time and money. New features include: Business Process Administration: a new privilege has been added that allows administrators to override a Business Process Administration rule. This privilege permits users to edit a locked record, or unlock a record, in the event of a material change that needs to be reflected per corporate policy. Additionally, the Products Detailed object has been added to Business Process Administration, enabling record locking and logic to be applied. Expanded Integration: Oracle continues to improve Web Services each release, adding more object coverage and enabling customers and partners to easily integrate with CRM On Demand. Bottom Line Oracle CRM On Demand Release 24 enables organizations to get smarter, get more productive, and get the best value, period. For more information on Oracle CRM On Demand Release 24, please visit oracle.com/crmondemand

    Read the article

  • No OpenCL devices (13.10 Core I5 4430 Intel Graphics HD)

    - by Itai Bar-Haim
    I've been looking quite a lot for this but couldn't find anything that worked. I have an Intel Core i5-4430 based system with no extra graphics adapter (so it's using the onboard, integrated Intel HD Graphics that is part of the CPU), running Ubuntu 13.10. When running BOINC World Community Grid, it says "No usable GPUs". When running a bitcoin mining program, it says "No OpenCL devices". I searched the web and found two possible solutions: one was to use the Intel OpenCL driver for Xeon platforms, the other was to use the AMD driver. I tried both. I failed to install the Intel driver, as there were too many prerequisites that I just didn't manage to install, and the AMD installation was quite fast for its size (it's 200MB, and took far less than a minute to install), but it didn't solve the problem. Perhaps I'm looking in the wrong direction here, I'm not sure, but is there any way I can utilize the advanced features of my CPU for those distributed computation programs?
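    (A few diagnostics that may narrow this down, assuming the clinfo package is available for your release:)

        sudo apt-get install clinfo
        clinfo                   # lists OpenCL platforms/devices; empty output means no usable ICD
        ls /etc/OpenCL/vendors/  # the *.icd files tell the ICD loader which drivers are installed

    At the time, the Xeon-targeted Intel runtime exposed only the CPU on Linux, not the HD Graphics GPU, which would explain BOINC reporting no usable GPUs.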

    Read the article

  • The First Microsoft Dynamics NAV Builds on TFS 2010 Server

    - by ssmantha
    We are now successfully able to build Dynamics NAV solutions using the TFS Build workflow mechanisms. Lots of test builds were made; the builds can restore the NAV database, start from a fresh solution, take the latest version of the NAV objects, import them into Navision, and call the compile method. The workflow is also able to generate FOB files as output, which can be shipped directly to customers. I think this is the first implementation in the world of the TFS build concepts in conjunction with NAV. I think this is the time to change our thinking caps and try to approach ERP development with the practices of ALM included in ERP product development.

    Read the article

  • Problems in exporting terrain from Autodesk 3ds

    - by Jatin Kumar
    I am trying to make a small Counter-Strike sort of game, and for the terrain part I have exported the terrain in 3ds format from Autodesk 3ds Max and imported it into OpenGL using lib3ds. It's working fine, but with a few problems: The terrain is mainly made up of some cuboid boxes with textures on them, placed on a big flat surface with a boundary wall. In OpenGL I have enabled anti-aliasing, but there is still too much aliasing on the boundaries (visible when rotating the camera). I have tiled the floor with an image, but in OpenGL it is just the single image stretched over the complete surface. I have exported an animated model (skeleton + mesh + material + animation) from 3ds Max and used the cal3d library for reading it. The model also has a gun, which is not appearing in OpenGL, and it too has too much of an aliasing problem. I have googled around but couldn't find any relevant solutions. Thanks in advance

    Read the article

  • Think Global, Act Regional with Identity Globe Trotters

    - by Tanu Sood
    This month we will be introducing a new section on our blog. Titled "Identity Globe Trotters", this will be a monthly series featuring a regional topic on the last Friday of every month. We will invite guest contributors from different regions to highlight a region-specific business issue or solution, showcase a customer implementation, or lead a regional discussion of interest. If you have an identity management topic in mind that you'd like featured in this section, do let us know. We look forward to engaging in meaningful discussions with you on global perspectives and regional solutions.

    Read the article

  • Talend launches the first unified data management platform: what will change with Talend 4

    TALEND LAUNCHES THE MARKET'S FIRST UNIFIED DATA MANAGEMENT PLATFORM Talend, the leader in open source data integration with more than 1,000 paying customers (including eBay, Virgin Mobile, Sony Online Entertainment, Allianz, etc.), is launching the first unified data management platform. With more than 7 million downloads, Talend's data management solutions are the most widely used and deployed in the world. The company has a presence in North America, Europe and Asia, and relies on a worldwide network of partners. This news should therefore have a strong global impact. Talend 4.0 unifies data integration, data quality and Master Data Management to impro...

    Read the article

  • Sun's access manager taken over by former employees of the company: OpenSSO becomes OpenAM thanks to

    Sun's access manager taken over by former employees of the company: OpenSSO becomes OpenAM under the aegis of Simon Phipps, now an employee of ForgeRock. Among the Sun technologies whose fate is uncertain after the acquisition by Oracle, here is OpenSSO. OpenSSO is an open source access manager for web services, based on a single sign-on mechanism, that provides "essential identity services to simplify, transparently, the execution of single sign-on". Under Oracle's stewardship, this technology seemed to be headed for a dead end. The software giant already had its own solutions even before the acqui...

    Read the article

  • Open Source Web-based CMS for writing and managing API documentation

    - by netcoder
    This is a question that has somewhat been asked before (i.e.: How to manage an open source project's documentation). However, my question is a little different because: we're not developing open source software, but proprietary software; the documentation has to be hand-written, because we do not want to publish the actual software API documentation, but only the public API documentation; and I do want developers and project managers to write the documentation collaboratively. Obviously, wikis are a solution, but they're very generic. I'm looking for a more specialized tool for this job. I've looked around and found a few like Adobe RoboHelp, SaaS solutions and such, but I'd like to know if any open source software exists for that purpose. Do you know any open source web-based CMS for writing and managing API and software documentation?

    Read the article

  • JavaOne - Java SE Embedded Booth - Digi - Home Health Hub (HHH)

    - by David Clack
    Hi All, Another exciting platform we will have in the booth at JavaOne is the Digi Home Health Hub (HHH) platform. http://www.digi.com/products/wireless-wired-embedded-solutions/single-board-computers/idigi-telehealth-application-kit#overview This is a Freescale reference design that has been built by Digi. The system is powered by a Freescale i.MX28 ARM SoC; what's really exciting to me is that it has every wireless protocol you could ever want on a single motherboard: Ethernet, 802.11b/g/n Wi-Fi, Bluetooth, ZigBee, a configurable sub-GHz radio and NFC, plus USB, audio and an LCD/touch screen option. I've been experimenting with lots of wireless-capable healthcare products in the last few months, plus some Bluetooth pulse oximeters, and we have been looking at how the actual healthcare wireless protocols work. Steve Popovich, Vice President, Digi International, will be doing a talk at the Java Embedded @ JavaOne conference in the Hotel Nikko, right next door to the JavaOne show in the Hilton. If you are registered at JavaOne, you can come over to Java Embedded @ JavaOne for $100. Come see us in booth 5605. See you there. Dave

    Read the article

  • Game engine IDE template [on hold]

    - by Spencer Killen
    Hey, so I'm working on a fairly basic JavaScript game, and it's beginning to get to the point where the 'engine' I wrote is difficult to manage in an all-text environment. I've already thought of using a JavaScript IDE like JetBrains', but I was wondering if I could go one step further and use a piece of software as an IDE with a customizable GUI that I could use to automate class construction and such. For example, I have it set up right now so that every time I want to create a new block (it's a platformer) I must copy a text file and fill in all the settings, such as bounding box, sprite, etc. It would be a lot easier if I could press a button and have a menu appear where I would fill in these values (I have a GameMaker background). Is there software like this? If not, what are some similar solutions to my problem?

    Read the article

  • Wine pollutes "Open With" application list

    - by Yi Jiang
    The dialog box in question here is the one you get with the context menu option "Open with other applications". Wine seems to have inserted a dozen or more entries for each application I install, which makes it a pain to find the correct application. What can I do to remove the duplicates? Update: Neither of the two solutions really works. The bug is interesting, but the symptoms do not match my problem (I'm not having problems uninstalling applications, but rather with the entries that are inserted after installing them), and with the other one, all references to the Wine application are removed, which actually makes the problem worse (although it may be an acceptable solution if nothing else can be found). So this is still an open question; any takers?
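    (One commonly suggested cleanup, sketched here on the assumption that the duplicates are Wine's auto-generated file-type associations in the default per-user location:)

        # remove the associations Wine generated, then refresh the cache
        rm ~/.local/share/applications/wine-extension-*.desktop
        update-desktop-database ~/.local/share/applications

    Wine tends to recreate these the next time an installer runs winemenubuilder, so the step may need repeating, or winemenubuilder can be disabled via an override.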

    Read the article

  • What is the best way to restrict adult content on 12.04 LTS

    - by Stephen Myall
    I bought my kids a PC and installed 12.04 (Unity) on it. The bottom line is, I want my children to use the computer unsupervised, while I have confidence they cannot access anything inappropriate. What I have looked at: I was looking at Scrubit, which allows me to configure my Wi-Fi router, but this solution would also affect my other PCs and mobile devices. It is not feasible, as I just want the solution to work on one PC. I also did some Google searches and came across Nanny (it seems to look the part). My experience of OSS is that the best solutions frequently never appear first in a Google search list, so my question is very specific. I want to leverage your knowledge and experience to understand what the best way is to restrict adult content on 12.04 LTS, as this is important to me. Please don't answer this question with "try this or that" and then give me some PPA; I am looking for knowledge and experience from someone in my situation. Thanks in advance

    Read the article

  • nvidia GT 525m no screen found

    - by Pavan
    I'm a Dell XPS user with an nvidia GT 525M with Optimus. When I installed the nvidia driver I was greeted with a blank screen, and the error in the nvidia log was "Fatal error: no screen found". After searching for solutions, I figured out that I had to insert "BusID" in the xorg.conf file. After rebooting I was again greeted with a blank screen, this time with the error "Fatal error: screen already occupied". I don't know what exactly the problem is. Do I need to blacklist the Intel built-in graphics in favour of nvidia on my machine, or does anything else need to be done? Please guide me.

    Read the article

  • INVITATION: EMEA MASTER DATA MANAGEMENT (MDM) PARTNER SUMMIT, 5th December 2011

    - by mseika
    Oracle is pleased to invite you to the EMEA Master Data Management Partner Summit in Portugal on 5th December 2011. Partners such as you have been a key contributor to the growth of Oracle's MDM, and to empower your further growth, Oracle has formed a dedicated MDM Specialization Program to help you further develop your organization's readiness in selling and delivering the Oracle Master Data Management solutions that best suit your go-to-market plans and initiatives. For more information about the MDM Partner Summit, including the detailed agenda, please click here (login required). Register Now The MDM Partner Summit will be followed by a 4-day MDM Partner Hands-On training running from Dec 6th to 9th, with arrival on Dec 5th. Please feel free to register your company's sales and technical employees. See here for more details, such as the training agenda and registration.

    Read the article

  • Invitation: EMEA Master Data Management (MDM) Partner Summit, 5th December 2011

    - by swalker
    Oracle is pleased to invite you to the EMEA Master Data Management Partner Summit in Portugal on 5th December 2011. Partners such as you have been a key contributor to the growth of Oracle's MDM, and to empower your further growth, Oracle has formed a dedicated MDM Specialization Program to help you further develop your organization's readiness in selling and delivering the Oracle Master Data Management solutions that best suit your go-to-market plans and initiatives. For more information about the MDM Partner Summit, including the detailed agenda, please click here (login required). Register Now The MDM Partner Summit will be followed by a 4-day MDM Partner Hands-On training running from Dec 6th to 9th, with arrival on Dec 5th. Please feel free to register your company's sales and technical employees. See here for more details, such as the training agenda and registration. Click here to see the full invitation.

    Read the article

  • Ubuntu 11.10 and Atheros AR8131 ethernet card

    - by nivcaner
    I have an Atheros AR8131 Ethernet card in a Lenovo B560 laptop. Sometime in the past, probably at some upgrade or other, my wired connection stopped functioning. I know the card is OK because it works with Windows. I tried upgrading to Ubuntu 11.10, hoping the problem would go away. It didn't... I tried to google the problem, but none of the solutions I found worked for me. This is probably a driver problem. Any help will be appreciated. Please note that I'm a Linux newbie, so I don't really know which logs to post... Niv
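    (Some first diagnostics that may help here; the AR8131 is normally handled by the atl1c kernel module:)

        lspci -nnk | grep -iA3 ethernet   # shows the card and which driver, if any, is bound
        sudo modprobe atl1c               # try loading the driver manually
        dmesg | grep -i atl1c             # kernel messages from the driver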

    Read the article

  • Create speed baseline for local web file

    - by Michael Jasper
    Is there any tool or method that will load a localhost page a number of times and return averaged data for load times, onload events, DOM-ready events, etc.? I'd like to work on page speed optimization, but need a baseline before I begin. I have used both Google Analytics and Webmaster Tools, but I'd like an automated solution that runs locally. My ideal solution would be a program or script that would take the path/file and number of iterations, then take several minutes to load the page n times without cache and crunch the numbers to create a baseline.
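    (Nothing standard does exactly this, but as a sketch of the script described - transfer timings only, since onload and DOM-ready events need a browser driver such as Selenium:)

        #!/bin/bash
        # usage: ./baseline.sh <url> <iterations>
        URL=${1:-http://localhost/index.html}
        N=${2:-10}
        for i in $(seq "$N"); do
          curl -s -o /dev/null -H 'Cache-Control: no-cache' \
               -w '%{time_total}\n' "$URL"
        done | awk '{sum+=$1} END {printf "average over %d runs: %.3fs\n", NR, sum/NR}'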

    Read the article
