Search Results

Search found 21861 results on 875 pages for 'external library'.


  • News about Oracle Documaker Enterprise Edition

    - by Susanne Hale
    Updates come from the Documaker front on two counts.

    Oracle Documaker Awarded XCelent Award for Best Functionality
    Celent has published a new report entitled Document Automation Solution Vendors for Insurers 2011. In the evaluation, Oracle received the XCelent award for Functionality, which recognizes the solution as the leader in this category of the evaluation. According to Celent, “Insurers need to address issues related to the creation and handling of all sorts of documents. Key issues in document creation are complexity and volume. Today, most document automation vendors provide an array of features to cope with the complexity and volume of documents insurers need to generate.” The report ranks ten solution providers on Technology, Functionality, Market Penetration, and Services. Each profile provides detailed information about the vendor and its document automation system, the professional services and support staff it offers, product features, insurance customers and reference feedback, its technology, implementation process, and pricing. A summary of the report is available at Celent’s web site.

    Documaker User Group in Wisconsin Holds First Meeting
    Oracle Documaker users in Wisconsin made the first Documaker User Group meeting a great success, with representation from eight companies. On April 19, over 25 attendees got together to share information, best practices, experiences and concepts related to Documaker and enterprise document automation; they were also able to share feedback with Documaker product management. One insurer shared how they publish and deliver documents to both internal and external customers as quickly and cost-effectively as possible, since providing point-of-sale documents to the sales force in real time is crucial to obtaining and maintaining the book of business. They outlined best practices that ensure consistent development and testing strategies and processes are in place to maximize performance and reliability. And they gave an overview of the supporting applications they developed to monitor and improve performance as well as monitor and track each transaction. Wisconsin User Group meeting photos are posted on the Oracle Insurance Facebook page http://www.facebook.com/OracleInsurance. The Wisconsin User Group will meet again on October 26. If you and other Documaker customers in your area are interested in setting up a user group, please contact Susanne Hale ([email protected]), (703) 927-0863.

    Read the article

  • Does it matter to you that software is "available source" but not "open source"?

    - by ccpod
    You probably know the list of open source licenses officially approved by the OSI. Most notable, I guess, would be the GPL, MIT, [insert your favorite license here]. I recently ran into a project which, although its source was available (the creator made all source code public), was not officially open source under one of those licenses. It released the source, but made no promise to release the source in the future. It allowed modification suggestions, but made no promises to accept patches and disallowed external distribution of externally-patched versions. It allowed the use of the software in commercial or paid projects, but disallowed the sale of the software itself. I suppose it could be called "available source", not open source as we like to think of it.

    I can see why the management team of a company wouldn't want to do business with this software. They can't fork it, they can't sell it, they can't create their own version of the software and distribute it or sell it.

    But would it matter to you as part of a software engineering team that's just using this software? I can still get my work done with it, I can use it in a project for which I'm paid (but I can't sell the software itself, which I'm not in the business of doing anyway), and I can make changes to the code to make it behave differently for my needs (but I can't make those modifications public), and if I do want those modifications officially made available to others, the approval is up to the project itself and they choose whether to incorporate them in an official release or not. So we know that a company that wants to base its business on this "available source" software can't do that, but as someone from the software engineering team, would those differences matter to you or do they seem less relevant? Curious what others think of this.

    Read the article

  • Machine Check Exception

    - by Karl Entwistle
    When trying to install ubuntu-12.04-desktop-amd64.iso from USB I get a Machine Check Exception. http://en.wikipedia.org/wiki/Machine_Check_Exception states the error can occur due to:
    - poorly fitted heatsink/computer fans (the same problem can happen with excessive dust in the CPU fan)
    - an overloaded internal or external power supply (fixable by upgrading)
    So I tried the following:
    - Using rubbing alcohol to remove all the thermal paste from the CPU and heatsink; I then reseated the CPU after checking all the pins on the MOBO, and everything seems fine.
    - Booting without the GPU to see if the PSU was being overstressed.
    - Removing all RAM apart from one stick and running Memtest86, which it passed.
    - Using Ubuntu 10.04.4 Desktop 64 bit (different USB slots and USB sticks).
    - Using Ubuntu 12.04 Desktop 64 bit (different USB slots and USB sticks).
    - Resetting the BIOS using the Clear CMOS jumper.
    - Removing all HD power cables and SATA cables.
    - Updating the BIOS from F2 to F6.
    My PC is using the following parts:
    - Gigabyte GA-Z77-DS3H (F6 BIOS)
    - Intel Core i7 3770K 3.5GHz Socket 1155
    - G-Skill 8GB (2x4GB) DDR3 1600MHz RipjawsX Memory Kit CL9 (9-9-9-24) 1.5V
    - Be Quiet Shadow Rock Pro
    - Be Quiet Pure Power 730W Modular PSU
    - Sapphire HD 6870 1GB GDDR5 DVI HDMI DisplayPort PCI-E Graphics Card
    Any ideas?

    Read the article

  • Asus Sonic Master on Asus N53SV

    - by David Winchester
    I have read that there's a problem getting the subwoofer working on these laptops. I tried this solution (No sound from external subwoofer) but I don't know how to verify that the subwoofer is properly functioning. I use the PulseAudio equalizer and the bass seems to work fine, but when I go to the Sound Settings, I can't move the bar where it says 'Subwoofer' in my sound card option, so I don't know if everything is alright. If someone has a solution I would like to know, because there isn't much information regarding this. By the way, I'm using Ubuntu 12.04 64-bit. Thanks beforehand, Dave

    EDIT - Possible Solution
    Well, I will post a solution that worked for me and I think it will help a lot of users. I finally got the subwoofer working. Besides adding the line options snd-hda-intel model=asus-mode4 to /etc/modprobe.d/alsa-base.conf, I deleted the lines with load-module module-combine and module-combine-sink in /etc/pulse/default.pa (in the home folder there's also a ~/.pulse/default.pa file; I don't know if it has the lines too). To check that all the channels are working, I think this command does it: speaker-test -c6 -l1 -twav. I use pulseaudio-equalizer and the bass sounds very good when properly adjusted. Also, all the channels seem to work fine and the sound is even better than in Windows (where I don't have an equalizer). I pointed out a module-combine and module-combine-sink problem before because one day I turned on my laptop and PulseAudio didn't work, so I deleted the lines with those names (I don't know if they came by default; maybe I added them sometime when I was trying to fix my speakers). After all this, I can now move the Subwoofer bar in the Sound configuration. Anyway, the equalizer does a great job and it improves the sound a lot.

    Read the article

  • Cannot apply unity --reset after modifying files

    - by Alex Cline
    So I have an idea of what I did wrong, I am just not sure how to fix it. I used the Unity Glass mod: http://www.omgubuntu.co.uk/2012/07/unity-glass-offers-refined-new-look-for-the-unity-launcher
    After removing it, I cannot reset unity and it does not work. Even after purging Unity and reinstalling it, I cannot seem to replace the missing files.

      $ unity --reset
      WARNING: Unity currently default profile, so switching to metacity while resetting the values
      unity-panel-service: no process found
      Checking if settings need to be migrated ...no
      Checking if internal files need to be migrated ...no
      Backend : gconf
      Integration : true
      Profile : unity
      Adding plugins
      Initializing core options...done
      compiz (core) - Warn: failed to receive ConfigureNotify event on 0x1c00027
      Initializing composite options...done
      Initializing opengl options...done
      Initializing decor options...done
      Initializing vpswitch options...done
      Initializing snap options...done
      Initializing mousepoll options...done
      Initializing resize options...done
      Initializing place options...done
      Initializing move options...done
      Initializing wall options...done
      Initializing grid options...done
      I/O warning : failed to load external entity "/home/arcline/.compiz/session/10b624e5c8f98c5325134625607758338300000051770001"
      Initializing session options...done
      Initializing gnomecompat options...done
      Initializing animation options...done
      Initializing fade options...done
      Initializing unitymtgrabhandles options...done
      Initializing workarounds options...done
      Initializing scale options...done
      compiz (expo) - Warn: failed to bind image to texture
      Initializing expo options...done
      Initializing ezoom options...done
      (compiz:7038): Gtk-WARNING **: Theme parsing error: gnome-panel.css:28:11: Not using units is deprecated. Assuming 'px'.
      (compiz:7038): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed
      Segmentation fault (core dumped)

    Read the article

  • Amit Jasuja's Session at Gartner IAM with Ranjan Jain of Cisco

    - by Naresh Persaud
    If you did not get a chance to attend Amit Jasuja's session at Gartner IAM this week in Las Vegas, here is a summary of the session and a copy of the slides. The agenda featured an introduction by Ray Wagner, Managing VP at Gartner, followed by Amit discussing the trends in Identity and Access Management shaping Oracle's strategy. Today we are seeing the largest re-architecture in a decade. Every business from manufacturing to retail is transforming the way it does business. Manufacturing companies are becoming manufacturing services companies. Retail organizations are embracing social retail. Healthcare is being delivered online around the clock. Identity Management is at the center of the transformation. Whether you are Toyota embracing a social network for cars or launching the next iPhone, the identity of the user provides context to enable the interaction and secure the experience. All of these require greater attention to the context of the user and externalizing applications for customers and employees. Ranjan discussed how Cisco is transforming by integrating 1800 applications into a single access management framework and consolidating 3M users across 4 data centers to support internal and external processes. David Lee demonstrated how to use Oracle Access Manager 11g R2 on a mobile application to sign on across multiple applications while connecting mobile applications to a single access control policy.

    Read the article

  • Keep basic game physics separate from basic game object? [on hold]

    - by metamorphosis
    If anybody has dealt with a similar situation I'd be interested in your experience/wisdom. I'm developing a 2D game library in C++. I have game objects with very basic physics, and they also have movement classes attached to differing states - for example, a different movement type based on whether the character is jumping, on ice, whatever. In terms of storing velocity and acceleration impulses, are they best held by the object, or by the associated movement class? The reason I ask is that I can see advantages to both approaches. If you store the physics data in the movement class, you have to pass physics information between class instances when a state change occurs (i.e. impulses, gravity, etc.), but the class has total control over whether that physics is updated or not. An obvious example of where this would be useful is an object affected by something which causes it to ignore gravity, or something like that. On the other hand, if you store the physics data in the object class, it feels more logical and you don't have to go around passing physics impulses and gravity around, but the control that the movement class has over the object's physics becomes more convoluted. Basically the difference is between:
    - object -> physics stacks (acceleration impulses etc.) -> physics functions -> movement type; the movement type makes physics function calls through the object
    - object -> movement type -> physics stacks -> physics functions; the object forwards external physics calls onto the movement type, and transfers the physics stacks between movement types when a state change occurs
    Are there best practices here?
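    For illustration, here is a minimal sketch of the second layout (the object owns the physics data; the active movement type decides when and how it is applied). It is written in Java rather than C++ purely for illustration - the shape translates directly - and every name in it is hypothetical:

      interface MovementType {
          // Each movement state decides whether and how the object's physics is applied.
          void update(GameObject obj, double dt);
      }

      class JumpMovement implements MovementType {
          private static final double GRAVITY = -9.8;
          public void update(GameObject obj, double dt) {
              obj.vy += GRAVITY * dt;            // this state chooses to apply gravity
              obj.x  += obj.vx * dt;
              obj.y  += obj.vy * dt;
          }
      }

      class IceMovement implements MovementType {
          public void update(GameObject obj, double dt) {
              obj.vx *= 0.995;                   // weak friction, no gravity applied on flat ice
              obj.x  += obj.vx * dt;
          }
      }

      class GameObject {
          double x, y, vx, vy;                   // physics data lives on the object...
          MovementType movement = new JumpMovement();

          void setMovement(MovementType m) {     // state change: no physics hand-off required
              movement = m;
          }

          void update(double dt) {
              movement.update(this, dt);         // ...but the movement type drives the update
          }
      }

    Because the stacks stay on the object, switching movement types on a state change is just a pointer swap with no physics hand-off, and a movement type can still decide to ignore gravity simply by not applying it in its update.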

    Read the article

  • Native packaging for JavaFX

    - by igor
    JavaFX 2.2 adds a new packaging option for JavaFX applications, allowing you to package your application as a "native bundle". This gives your users a way to install and run your application without any external dependencies on a system JRE or FX SDK. I'd like to give you an overview of what it is, the motivation behind it, and finally explain how to get started with it. Screenshots may give you some idea of the user experience, but first-hand experience is always the best. Before we go into all of the boring details, here are a few different flavors of Ensemble for you to try: exe, msi, dmg, rpm installers and a zip of the linux bundle for non-rpm aware systems. Alternatively, check out the native packages for JFXtras 2.

    What's wrong with existing deployment options?
    JavaFX 2 applications are easy to distribute as a standalone application or as an application deployed on the web (embedded in the web page or as a link to launch the application from the webpage). JavaFX packaging tools, such as the ant tasks and the javafxpackager utility, simplify the creation of deployment packages even further. Why add new deployment options? JavaFX applications have an implicit dependency on the availability of the Java and JavaFX runtimes, and while existing deployment methods provide a means to validate that the system requirements are met -- and even guide the user to perform required installations/upgrades -- they do not fully address all of the important scenarios. In particular, here are a few examples:
    - the user may not have admin permissions to install new system software
    - if the application was certified to run in a specific environment (fixed version of Java and JavaFX), it may be hard to ensure the user has this environment due to an autoupdate of the system version of Java/JavaFX (to ensure they are secure); potentially, other apps may have a requirement for a different JRE or FX version that your app is incompatible with
    - your distribution channel may disallow dependencies on external frameworks (e.g. the Mac AppStore)

    What is a "native package" for a JavaFX application?
    In short, it is a wrapper for your JavaFX application that makes it into a platform-specific application bundle. Each bundle is self-contained and includes:
    - your application code and resources (the same set needed to launch a standalone application from a jar)
    - Java and JavaFX runtimes (private copies to be used by this application only)
    - a native application launcher
    - metadata (icons, etc.)
    No separate installation is needed for the Java and JavaFX runtimes. The bundle can be distributed as a .zip or packaged as a platform-specific installer. No application changes are needed; the same jar app binaries can be deployed as a native bundle, double-clickable jar, applet, or Web Start app.

    What is good about it:
    - Easy deployment of your application on fresh systems, without admin permissions when using .zip or a user-level installer
    - No-hassle compatibility. Your application is using a private copy of Java and JavaFX. The developer (you!) controls when these are updated.
    - Easily package your application for the Mac AppStore (or Windows, or...)
    - The process name of the running application is named after your application (and not just java.exe)
    - Easily deploy your application using enterprise deployment tools (e.g. deploy as MSI)
    - Support is built into JDK 7u6 (which includes JavaFX 2.2)

    Is it a silver bullet for deployment, such that other deployment options will be deprecated? No. There are no plans to deprecate the other deployment options supported by JavaFX; each approach addresses different needs.
    Deciding whether native packaging is the best way to deploy your application depends on your requirements. A few caveats to consider:

    "Download and run" user experience
    Unlike web deployment, the user experience is not about "launch app from web". It is more of a "download, install and run" process, and the user may need to go through additional steps to get the application launched - e.g. accepting a browser security dialog or finding and launching the application installer from the "downloads" folder.

    Larger download size
    In general the size of the bundled application will be noticeably higher than the size of the unbundled app, as a private copy of the JRE and JavaFX is included. We're working to reduce the size through compression and customizable "trimming", but it will always be substantially larger than an app that depends on a "system JRE".

    Bundle per target platform
    Bundle formats are platform specific. Currently a native bundle can only be produced for the same system you are building on. That is, if you want to deliver native app bundles on Windows, Linux and Mac you will have to build your project on all three platforms.

    Application updates are the responsibility of the developer
    Web-deployed Java applications automatically download application updates from the web as soon as they are available. The Java Autoupdate mechanism takes care of updating the Java and JavaFX runtimes to the latest secure version several times every year. There is no built-in support for this for bundled applications. It is possible to use 3rd party libraries (like Sparkle on Mac) to add autoupdate support at the application level. In a future version of JavaFX we may include built-in support for autoupdate (add yourself as a watcher for RT-22211 if you are interested in this).

    Getting started with native bundles
    First, you need to get the latest JDK 7u6 beta build (build 14 or later is recommended). On Windows/Mac/Linux it comes with the JavaFX 2.2 SDK as part of the JDK installation and contains the JavaFX packaging tools, including:
    - bin/javafxpackager - command line utility to produce JavaFX packages
    - lib/ant-javafx.jar - set of ant tasks to produce JavaFX packages (the most recommended way to deploy apps)
    For general information on how to use them refer to the Deploying JavaFX Application guide. Once you know how to use these tools to package your JavaFX application for other deployment methods, there are only a few minor tweaks necessary to produce native bundles:
    - make sure java is used from the JDK 7u6 bundle you have installed; adjust your PATH settings if needed
    - if you are using the ant tasks, add the nativeBundles="all" attribute to the fx:deploy task
    - if you are using javafxpackager, pass the "-native" option to the deploy command; or if you are using the makeall command then it will try to build native packages by default
    - the resulting bundles will be in the "bundles" folder next to the other deployment artifacts
    Note that building some types of native packages (e.g. .exe or .msi) may require additional free 3rd party software to be installed and available on PATH.
    As of JDK 7u6 build 14 you can build the following types of packages:
    - Windows
      - bundle image
      - EXE: Inno Setup 5 or later is required; the resulting exe will perform a user-level installation (no admin permissions are required); at least one shortcut will be created (menu or desktop); the application will be launched at the end of the install
      - MSI: WiX 3.0 or later is required; the resulting MSI will perform a user-level installation (no admin permissions are required); at least one shortcut will be created (menu or desktop)
    - MacOS
      - bundle image
      - dmg (drag and drop) installer
    - Linux
      - bundle image
      - rpm: rpmbuild is required; a shortcut will be added to the programs menu
    If you are using Netbeans for producing the deployment packages then you will need to add a custom build step to build.xml to execute the fx:deploy task with native bundles enabled. Here is what we do for the BrickBreaker sample:

      <target name="-post-jfx-deploy">
          <fx:deploy width="${javafx.run.width}" height="${javafx.run.height}"
                     nativeBundles="all"
                     outdir="${basedir}/${dist.dir}" outfile="${application.title}">
              <fx:application name="${application.title}" mainClass="${javafx.main.class}">
                  <fx:resources>
                      <fx:fileset dir="${basedir}/${dist.dir}" includes="BrickBreaker.jar"/>
                  </fx:resources>
                  <info title="${application.title}" vendor="${application.vendor}"/>
              </fx:application>
          </fx:deploy>
      </target>

    This is pretty much regular use of the fx:deploy task; the only special thing here is nativeBundles="all". Perhaps the easiest way to try building native bundles is to download the latest JavaFX samples bundle and build Ensemble, BrickBreaker or SwingInterop. Please give it a try and share your experience. We need your feedback! BTW, do not hesitate to file bugs and feature requests in the JavaFX bug database!

    Wait! How can I ...
    This entry is not a comprehensive guide to native bundles, and we plan to post more on this topic. However, I am sure that once you play with native bundles you will have a lot of questions. We may not have all the answers, but please do not hesitate to ask! Knowing all of the questions is the first step to finding all of the answers.

    Read the article

  • Help with a simple incremental backup script

    - by Evan
    I'd like to run the following incomplete script weekly as a cron job to back up my home directory to an external drive mounted as /mnt/backups:

      #!/bin/bash
      #
      TIMEDATE=$(date +%b-%d-%Y-%k:%M)
      LASTBACKUP=pathToDirWithLastBackup
      rsync -avr --numeric-ids --link-dest=$LASTBACKUP /home/myfiles /mnt/backups/myfiles$TIMEDATE

    My first question is: how do I correctly set LASTBACKUP to the most recently created directory in /mnt/backups? Secondly, I'm under the impression that using --link-dest means that files in previous backups will not be copied again in later backups if they still exist, but will rather symbolically link back to the originally copied files? However, I don't want to retain old files forever. What would be the best way to remove all the backups before a certain date without losing files that may be linked in those backups by current backups? Basically I'm looking to merge all the files before a certain date up to that date, if that makes more sense than the way I initially framed the question :). Can --link-dest create hard links, and if so, wouldn't just deleting previous directories fail to actually remove the linked files? Finally I'd like to add a line to my script that compresses each newly created backup folder (/mnt/backups/myfiles$TIMEDATE). Based on reading this question, I was wondering if I could just use the line gzip --rsyncable /backups/myfiles$TIMEDATE after I run rsync, so that sequential rsync --link-dest executions would find already copied and compressed files? I know that's a lot, so many thanks in advance for your help!!

    Read the article

  • Portal Content Personalization

    - by john.brunswick
    To make the most effective use of a portal and content management platform, personalization is a critical component of delivering the most value to end users. Regardless of what type of constituents you may be serving, content relevance is critical to effectively support business goals like self-service, communication within a geographically distributed organization, lead generation and customer loyalty. This especially holds true when serving external parties, as they generally have a lower threshold for digging through your site to locate a particular item of interest and are apt to leave or dial a helpdesk if their efforts cannot locate the relevant information. Optimal delivery of content can be achieved through a variety of methods, but it is generally a blend of security and filtering via metadata that can drive the most return with the least amount of upfront effort and ongoing upkeep. In a portal environment various platform components have their strong suits, and by combining the capabilities of enterprise portal and content platforms much of the groundwork for personalization can be achieved in a configuration-based manner. In our discussion we will cover terminology and concepts, example scenarios and technical implementation strategies to help showcase how personalization of content can be achieved within a portal from a technical and strategic standpoint. Read on to better understand the chart below and the components at our disposal to personalize content delivery.

    Read the article

  • ArchBeat Link-o-Rama for 2012-03-23

    - by Bob Rhubart
    - Why is Java EE 6 better than Spring? | Arun Gupta (blogs.oracle.com)
      "While Spring was revolutionary in its time and is still very popular and quite main stream in the same way Struts was circa 2003, it really is last generation's framework," says Arun Gupta. "Some people are even calling it legacy."
    - OWSM vs. OEG - When to use which component - 11g | Prakash Yamuna (blogs.oracle.com)
      Prakash Yamuna shares a brief but informative summary.
    - Webcast Q&A: Demystifying External Authorization (blogs.oracle.com)
      The slide deck, a transcript of the audience Q&A, and a link to a replay of the recent Oracle Entitlements Server webcast featuring Tanya Baccam from SANS Institute.
    - Anil Gaur on Cloud Computing Support in Java EE 7 (www.infoq.com)
      InfoQ's Srini Penchikala talks with Anil Gaur, Vice President of Software Development at Oracle, about cloud computing support in Java EE 7, the project road map and timeline, the cloud API in Java EE 7, and cloud development and deployment tools.
    - Want to Patch your Red Hat Linux Kernel Without Rebooting? | Lenz Grimmer (blogs.oracle.com)
      Lenz Grimmer shares info and resources for those interested in learning more about KSplice.
    - Oracle Linux Newsletter, March Edition (www.oracle.com)
      Get a spring dose of Linux goodness.
    - Oracle Enterprise Gateway: Integration with Oracle Service Bus and Oracle Web Services Manager (www.oracle.com)
      Oracle Enterprise Gateway and Oracle Web Services Manager are central points of a SOA initiative when security is paramount. In this article, William Markito Oliveira and Fabio Mazanatti describe how to integrate these products with Oracle Service Bus.
    - Thought for the Day
      "We always strain at the limits of our ability to comprehend the artifacts we construct — and that's true for software and for skyscrapers." — James Gosling

    Read the article

  • Conversion of tiff image in Python script - OCR using tesseract

    - by PYTHON TEAM
    I want to convert a tiff image file to a text document. My code works as I expected for tiff images with a usual font, but it's not working for a French script font. My tiff image file contains text whose font is in French script format. Here is my code:

      import Image
      import subprocess
      import util
      import errors

      tesseract_exe_name = 'tesseract'  # Name of executable to be called at command line
      scratch_image_name = "temp.bmp"   # This file must be .bmp or other Tesseract-compatible format
      scratch_text_name_root = "temp"   # Leave out the .txt extension
      cleanup_scratch_flag = True       # Temporary files cleaned up after OCR operation

      def call_tesseract(input_filename, output_filename):
          """Calls external tesseract.exe on input file (restrictions on types),
          outputting output_filename+'txt'"""
          args = [tesseract_exe_name, input_filename, output_filename]
          proc = subprocess.Popen(args)
          retcode = proc.wait()
          if retcode != 0:
              errors.check_for_errors()

      def image_to_string(im, cleanup=cleanup_scratch_flag):
          """Converts im to file, applies tesseract, and fetches resulting text.
          If cleanup=True, delete scratch files after operation."""
          try:
              util.image_to_scratch(im, scratch_image_name)
              call_tesseract(scratch_image_name, scratch_text_name_root)
              text = util.retrieve_text(scratch_text_name_root)
          finally:
              if cleanup:
                  util.perform_cleanup(scratch_image_name, scratch_text_name_root)
          return text

      def image_file_to_string(filename, cleanup=cleanup_scratch_flag, graceful_errors=True):
          """If cleanup=True, delete scratch files after operation."""
          try:
              try:
                  call_tesseract(filename, scratch_text_name_root)
                  text = util.retrieve_text(scratch_text_name_root)
              except errors.Tesser_General_Exception:
                  if graceful_errors:
                      im = Image.open(filename)
                      text = image_to_string(im, cleanup)
                  else:
                      raise
          finally:
              if cleanup:
                  util.perform_cleanup(scratch_image_name, scratch_text_name_root)
          return text

      if __name__ == '__main__':
          im = Image.open("/home/oomsys/phototest.tif")
          text = image_to_string(im)
          print text
          try:
              text = image_file_to_string('fnord.tif', graceful_errors=False)
          except errors.Tesser_General_Exception, value:
              print "fnord.tif is incompatible filetype. Try graceful_errors=True"
              print value
          text = image_file_to_string('fnord.tif', graceful_errors=True)
          print "fnord.tif contents:", text
          text = image_file_to_string('fonts_test.png', graceful_errors=True)
          print text

    Read the article

  • Scheduling of jobs in the presence of constraints in Java

    - by Asgard
    I want to know how to implement a solution to this problem: a task is performed by several people running some basic jobs with known durations in time units (days, months, etc.). The execution of the jobs can be subject to time constraints: a job, for example, cannot start until another one (or others) has finished, and so on. I want to design and build an application to check the correctness of the job activities and to propose a schedule of the jobs, if any exists, which respects the constraints. The input provides the jobs and the associated constraints. The expected output is the scheduling of the jobs. The specification of an elementary job consists of the pair <job-id, duration>. A constraint is expressed by means of a quintuple of the type <S/E, id-job1, B/A, S/E, id-job2>: the beginning (S) or the end (E) of job id-job1 must take place before (B) / after (A) the beginning (S) / end (E) of job id-job2. If there are no dependencies between some jobs, then those jobs can be done in parallel. As a simple example, consider the input:

      jobs
        jobs(0, 3)
        jobs(1, 4)
        jobs(2, 5)
        jobs(3, 3)
        jobs(4, 3)
      constraints
        constraints(S, 1, A, E, 0)
        constraints(S, 4, A, E, 2)

    Possible output:

      t  0 1 2 3 4
      0  * - * * -
      1  * - * * -
      2  * - * * -
      3  * - * * -
      4  - * * - -
      5  - * * - -
      6  - * - - *
      7  - * - - *
      8  - * - - *
      9  - - - - *

    How can I code an efficient Java scheduler (avoiding intense backtracking, if possible) to manage jobs with these constraints, as described? I have seen a discussion in a forum thread where a user seems to have solved the problem easily, but he hasn't given enough details for others to compile a working project (I'm a noob), and I'm interested in an effective implementation of the solution (without using external libraries). If someone helps me, I'll give him very good feedback ;)
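    One backtracking-free way to approach this is to read every constraint as a lower bound between two time points (start or end of a job) and relax earliest start times until nothing changes, much like critical-path scheduling; if the relaxation never settles, the constraints are cyclic and no schedule exists. The following is a minimal, self-contained Java sketch of that idea - class and method names are illustrative, not taken from the thread mentioned above:

      import java.util.*;

      public class ConstraintScheduler {

          // A point of a job: S = start, E = end.
          enum Point { S, E }

          // Normalised constraint: point (pX, jobX) must not come before point (pY, jobY).
          // An <.., A, ..> constraint maps directly; a <.., B, ..> constraint is the same
          // relation with the two points swapped.
          static class Constraint {
              final Point pX; final int jobX;
              final Point pY; final int jobY;
              Constraint(Point pX, int jobX, Point pY, int jobY) {
                  this.pX = pX; this.jobX = jobX; this.pY = pY; this.jobY = jobY;
              }
          }

          static long pointTime(long[] start, int[] dur, Point p, int job) {
              return start[job] + (p == Point.E ? dur[job] : 0);
          }

          /** Returns earliest start times, or null if the constraints are cyclic. */
          static long[] schedule(int[] dur, List<Constraint> cs) {
              long[] start = new long[dur.length];              // every job starts at t=0
              int limit = dur.length * Math.max(1, cs.size()) + 1;
              for (int pass = 0; pass < limit; pass++) {
                  boolean changed = false;
                  for (Constraint c : cs) {
                      long need = pointTime(start, dur, c.pY, c.jobY);
                      long have = pointTime(start, dur, c.pX, c.jobX);
                      if (have < need) {                        // push job X later
                          start[c.jobX] += need - have;
                          changed = true;
                      }
                  }
                  if (!changed) return start;                   // fixpoint reached: feasible
              }
              return null;                                      // still moving: cyclic constraints
          }

          public static void main(String[] args) {
              int[] dur = {3, 4, 5, 3, 3};
              List<Constraint> cs = Arrays.asList(
                  new Constraint(Point.S, 1, Point.E, 0),       // <S,1,A,E,0>
                  new Constraint(Point.S, 4, Point.E, 2));      // <S,4,A,E,2>
              long[] start = schedule(dur, cs);
              System.out.println(start == null ? "infeasible" : Arrays.toString(start));
              // Prints [0, 3, 0, 0, 5]: one valid (earliest-start) schedule for the example above.
          }
      }

    Printing the schedule as the asterisk table in the example, or rejecting malformed input, is then just bookkeeping on top of the computed start times; the relaxation itself is O(jobs x constraints) per pass and needs no backtracking.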

    Read the article

  • Building Web Applications with ACT and jQuery

    - by dwahlin
    My second talk at TechEd is focused on integrating ASP.NET AJAX and jQuery features into websites (if you’re interested in Silverlight you can download code/slides for that talk here). The content starts out by discussing ScriptManager features available in ASP.NET 3.5 and ASP.NET 4 and provides details on why you should consider using a Content Delivery Network (CDN). If you’re running an external-facing site then checking out the CDN features offered by Microsoft or Google is definitely recommended. The talk also goes into the process of contributing to the Ajax Control Toolkit as well as the new Ajax Minifier tool that’s available to crunch JavaScript and CSS files. The extra fun starts in the next part of the talk which details some of the work Microsoft is doing with the jQuery team to donate template, globalization and data linking code to the project. I go into jQuery templates, data linking and a new globalization option that are all being worked on. I want to thank Stephen Walther, Dave Reed and James Senior for their thoughts and contributions since some of the topics covered are pretty bleeding edge right now. The slides and sample code for the talk can be downloaded below.
    Download Slides and Samples

    Read the article

  • Ignoring Robots - Or Better Yet, Counting Them Separately

    - by [email protected]
    It is quite common to have web sessions that are undesirable from the point of view of analytics. For example, when there are either internal or external robots that check the site's health, index it or just extract information from it. These robotic sessions do not behave like humans, and if their volume is high enough they can sway the statistics and models.

    One easy way to deal with these sessions is to define a partitioning variable for all the models that is a flag indicating whether the session is "Normal" or "Robot". Then all the reports and predictions can use the "Normal" partition, while the counts and statistics for Robots are still available. In order for this to work, though, two conditions are necessary:
    1. It is possible to identify the robotic sessions.
    2. No learning happens before the identification of the session as a robot.
    The first point is obvious, but the second may require some explanation. While the default in RTD is to learn at the end of the session, it is possible to learn at any entry point. This is a setting for each model. There are various reasons to learn at a specific entry point, for example if there is a desire to capture exactly and precisely the data in the session at the time the event happened, as opposed to including changes up to the end of the session. In any case, if RTD has already learned on the session before the identification of a robot was done, there is no way to retract this learning.

    Identifying the robotic sessions can be done through the use of rules and heuristics. For example we may use some of the following:
    - Maintain a list of known robotic IPs or domains
    - Detect very long sessions, lasting more than a few hours or visiting more than 500 pages
    - Detect "robotic" behaviors like a methodical click on all the links of every page
    - Detect a session with 10 pages clicked at exactly 20 second intervals
    - Detect extensive non-linear navigation
    Now, an interesting experiment would be to use the flag above as an output of a model, to see if there are more subtle characteristics of robots such that a model can be used to detect robots even if they fall through the cracks of the rules and heuristics. In any case, the basic technique of partitioning the models by the type of session is simple to implement and provides a lot of advantages.
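    To make the rule-and-heuristic idea concrete, here is a small illustrative sketch of such a "Robot"/"Normal" flag computed from per-session statistics. It is generic Java, not Oracle RTD API code, and the field names and thresholds are invented for the example:

      import java.util.Set;

      // Hypothetical per-session statistics; in practice these would be computed
      // from the session's click stream.
      class SessionStats {
          String clientIp;
          int pagesVisited;
          double durationHours;
          double stdDevSecondsBetweenClicks;  // near 0 means perfectly regular clicking
          double fractionOfSiteLinksClicked;  // 1.0 means every link on every page was clicked
      }

      class RobotFlagger {
          private final Set<String> knownRobotIps;

          RobotFlagger(Set<String> knownRobotIps) { this.knownRobotIps = knownRobotIps; }

          /** Returns "Robot" or "Normal", usable as a partitioning flag for the models. */
          String classify(SessionStats s) {
              boolean robot =
                     knownRobotIps.contains(s.clientIp)                            // known robot IP/domain
                  || s.durationHours > 3 || s.pagesVisited > 500                   // very long session
                  || s.fractionOfSiteLinksClicked > 0.9                            // methodical crawl of every link
                  || (s.pagesVisited >= 10 && s.stdDevSecondsBetweenClicks < 0.5); // fixed-interval clicking
              return robot ? "Robot" : "Normal";
          }
      }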

    Read the article

  • Python productivity vs. Java productivity

    - by toc777
    Over on SO I came across a question regarding which platform, Java or Python, is best for developing on Google AppEngine. Many people were boasting of the increased productivity gained from using Python over Java. One thing I would say about the Python vs Java productivity argument is that Java has excellent IDEs to speed up development, whereas Python is really lacking in this area because of its dynamic nature. So even though I prefer to use Python as a language, I don't believe it gives quite the productivity boost compared to Java, especially when using a new framework. Obviously if it were Java vs Python and the only editor you could use was VIM then Python would give you a huge productivity boost, but when IDEs are brought into the equation it's not as clear-cut. I think Java's merits are often solely evaluated on a language level, and often on outdated assumptions, but Java has many benefits external to the language itself, e.g. the JVM (often criticized but offering huge potential), excellent IDEs and tools, huge numbers of third-party libraries, platforms, etc. Question: do Python and related dynamic languages really give the huge productivity boosts often talked about (with consideration given to using new frameworks and working with medium to large applications)?

    Read the article

  • Oracle 64-bit assembly throws BadImageFormatException when running unit tests

    - by pjohnson
    We recently upgraded to the 64-bit Oracle client. Since then, Visual Studio 2010 unit tests that hit the database (I know, unit tests shouldn't hit the database--they're not perfect) all fail with this error message:

      Test method MyProject.Test.SomeTest threw exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.BadImageFormatException: Could not load file or assembly 'Oracle.DataAccess, Version=4.112.3.0, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    I resolved this by changing the test settings to run tests in 64-bit. From the Test menu, go to Edit Test Settings, and pick your settings file. Go to Hosts, and change the "Run tests in 32 bit or 64 bit process" dropdown to "Run tests in 64 bit process on 64 bit machine". Now your tests should run.

    This fix makes me a little nervous. Visual Studio 2010 and earlier seem to change that file for no apparent reason, add more settings files, etc. If you're not paying attention, you could have TestSettings1.testsettings through TestSettings99.testsettings sitting there and never notice the difference. So it's worth making a note of how to change it in case you have to redo it, and being vigilant about files VS tries to add.

    I'm not entirely clear on why this was even a problem. Isn't that the point of an MSIL assembly, that it's not specific to the hardware it runs on? An IL disassembler can open the Oracle.DataAccess.dll in question, and in its Runtime property, I see the value "v4.0.30319 / x64". So I guess the assembly was specifically built to target 64-bit platforms only, possibly due to a 64-bit-specific difference in the external Oracle client upon which it depends. Most other assemblies, especially in the .NET Framework, list "msil", and a couple list "x86". So I guess this is another entry in the long list of ways Oracle refuses to play nice with Windows and .NET.

    If this doesn't solve your problem, you can read others' research into this error, and where to change the same test setting in Visual Studio 2012.

    Read the article

  • Why am I getting this "Connection to PulseAudio failed" error?

    - by Dave M G
    I have a computer that runs Mythbuntu 11.10. It has an external USB Kenwood Digital Audio device. When I open up pavucontrol, I get a "Connection to PulseAudio failed" message. If I do as the message suggests and run start-pulseaudio-x11, I get this output:

      $ start-pulseaudio-x11
      Connection failure: Connection refused
      pa_context_connect() failed: Connection refused

    How do I correct this error?

    Update: Somewhere during the course of doing the suggested tests in the comments, a new audio device has become visible in my sound settings. I have not attached or made any new device, so this must be the result of some setting change. The device I use and know about is the Kenwood Audio device. The "GF108" device will play sound through the Kenwood anyway, but not reliably.

    Command line output as requested in the comments:

      $ ls -l ~/.pulse*
      -rw------- 1 mythbuntu mythbuntu 256 Feb 28 2011 /home/mythbuntu/.pulse-cookie

      /home/mythbuntu/.pulse:
      total 200
      -rw-r--r-- 1 mythbuntu mythbuntu 8192 Oct 23 01:38 2b98330d36bf53bb85c97fc300000008-card-database.tdb
      -rw-r--r-- 1 mythbuntu mythbuntu 69 Nov 16 22:51 2b98330d36bf53bb85c97fc300000008-default-sink
      -rw-r--r-- 1 mythbuntu mythbuntu 68 Nov 16 22:51 2b98330d36bf53bb85c97fc300000008-default-source
      -rw-r--r-- 1 mythbuntu mythbuntu 49152 Oct 14 12:30 2b98330d36bf53bb85c97fc300000008-device-manager.tdb
      -rw-r--r-- 1 mythbuntu mythbuntu 61440 Oct 23 01:40 2b98330d36bf53bb85c97fc300000008-device-volumes.tdb
      lrwxrwxrwx 1 mythbuntu mythbuntu 23 Nov 16 22:50 2b98330d36bf53bb85c97fc300000008-runtime -> /tmp/pulse-EAwvLIQZn7e8
      -rw-r--r-- 1 mythbuntu mythbuntu 77824 Nov 1 12:54 2b98330d36bf53bb85c97fc300000008-stream-volumes.tdb

    And yet more requested command line output:

      $ ps auxw|grep pulse
      1000 2266 0.5 0.2 294184 9152 ? S<l Nov16 4:26 pulseaudio -D
      1000 2413 0.0 0.0 94816 3040 ? S Nov16 0:00 /usr/lib/pulseaudio/pulse/gconf-helper
      1000 4875 0.0 0.0 8108 908 pts/0 S+ 12:15 0:00 grep --color=auto pulse

    Read the article

  • Can't write to NTFS formatted drives

    - by mloman
    I'm not sure what has happened, but I've all of a sudden lost write access to any of my NTFS external drives. I installed a few games and apps from the software center, and now I can't make new folders or copy and paste files to anything that is NTFS. Everything is now read only, and I've tried so many things to fix it, but it seems hopeless. Just to check that it wasn't the drives themselves, I made a little NTFS-formatted TrueCrypt volume, and a FAT-formatted volume. And yes, it seems that Ubuntu is blocking me from writing anything to NTFS. What happened here? What's a way I can simply get write access to my NTFS drives, so I can just back up all my stuff? I'll probably reinstall Ubuntu. Please help.

    UPDATE (and thanks everyone for their quick replies): The problem has been solved. Prior to noticing that I had lost NTFS write permission, I had installed GParted from the software center, and there was an extension called ntfsprogs that came with it. During my search for a solution to the problem, I uninstalled GParted (as that was one of the apps I installed just before the problem). But that did not solve the problem. I came across an app called 'NTFS Configuration Tool'. When I installed this, it said that the ntfsprogs extension needed to be removed (so I guess uninstalling GParted didn't remove the ntfsprogs extension). I launched the NTFS Configuration Tool and now I have write access to NTFS drives. Unfortunately, I didn't check whether I had write permission prior to launching the NTFS Configuration Tool, so I'm not sure whether the NTFS Configuration Tool or the uninstallation of ntfsprogs gave me back NTFS write permission. Hopefully if another newbie encounters this problem, they'll come across this page and know what to do.

    Read the article

  • crash when using wireless network

    - by Erik
    Ubuntu 12.04, 32 bit. When I start up, my system almost instantly freezes. I found that the problem lies with my wireless network, because the crash always happens at the same time as the message that tells me I am connected to a network. When I start up connected to a cable everything works fine; I can even see that I am connected to our wireless network, until I unplug the ethernet cable - then it all freezes again. The same happened when I booted from a live CD (from USB). The problem only occurs on my netbook; my laptop doesn't give any problems. I don't know if anyone else has had similar problems, or if anyone knows a solution... Thanks in advance! Greetings, Erik

    EDIT (21/3/2012): It seems there is some external factor involved. I've been using my netbook for about 50 minutes now while connected to the wireless network, and it still hasn't crashed. Strangely enough, before these 50 minutes I had to shut it down by holding the power button, because I decided to try it again and that time it did crash... Before that (while connected with an ethernet cable) I had been trying to get my Bluetooth working; that didn't succeed entirely, but I guess it might be a factor in why it doesn't crash at the moment. Now I rebooted, connected, and no crash... If it crashes again, I'll update this post again. In the meantime, if anyone has an idea on what could be a factor, and on how to create a crash using that factor, feel free to ask!

    Read the article

  • How do I set up live audio streams to a DLNA compliant device?

    - by Takkat
    Is there a way to stream the live output of the sound card from our 12.04.1 LTS amd64 desktop to a DLNA-compliant external device in our network? Selecting media content in shared directories using Rygel, miniDLNA, and uShare is always fine - but so far we have completely failed to get a live audio stream to a client via DLNA. PulseAudio claims to have a DLNA/UPnP media server that together with Rygel is supposed to do just this, but we were unable to get it running. We followed the steps outlined in live.gnome.org, this answer here, and also in another similar guide. As soon as we select the local audio device, or our GST-Launch stream, in the DLNA client, Rygel displays the following message and the client states it reached the end of the playlist:

      (rygel:7380): Rygel-WARNING **: rygel-http-request.vala:97: Invalid seek request

    This is how we configured GST-Launch in rygel.conf:

      [GstLaunch]
      enabled=true
      launch-items=mypulseaudiosink
      mypulseaudiosink-title=Audio on @HOSTNAME@
      mypulseaudiosink-mime=audio/x-wav
      mypulseaudiosink-launch=pulsesrc device=<device> ! wavpackenc

    For <device> we tried the default sink name, this name appended with .monitor, and in addition upnp-sink and upnp.monitor, which were created when we selected DLNA media server from paprefs. We also tried to encode using lamemp3enc, with no luck. These are our pulseaudio modules: http://paste.ubuntu.com/1202913/ These are our sinks: http://paste.ubuntu.com/1202916/ Did we miss any other additional configuration needed to get this running? Are there any other alternatives for sending the audio of our sound card as a live stream to a DLNA client?

    Read the article

  • Configuring Request-Reply in JMSAdapter

    - by [email protected]
    Request-Reply is a new feature in the 11g JMSAdapter that helps you achieve the following:
    - It allows you to combine Request and Reply in a single step. In prior releases of the Oracle SOA Suite, you would have had to configure two distinct adapters.
    - It performs automatic correlation without you needing to configure BPEL "correlation sets". This works seamlessly in Mediator and BPMN as well.
    In order to configure JMSAdapter Request-Reply, please follow these steps:
    1) Drag and drop a JMSAdapter onto the "External References" swim lane in your composite editor.
    2) Enter default values for the first few screens in the JMS Adapter wizard till you hit the screen where the wizard prompts you to enter the operation name. Select "Request-Reply" as the "Operation Type" and Asynchronous as the "Operation Name".
    3) Select the Request and Reply queues in the following screens of the wizard. The message will be enqueued in the "Request" queue and the reply will be returned in the "Reply" queue. The reason I have used such a selector is that the back-end system that reads from the request queue and generates the response in the response queue actually generates more than one response, and hence I must use a filter to exclude the unwanted responses.
    4) Select the message schema for the request as well as the response.
    5) Add an <invoke> activity in BPEL corresponding to the JMS Adapter partner link. Please note that I am setting an additional header as my third-party application requires this.
    6) Add a <receive> activity just after the <invoke> and select the "Reply" operation. Please make sure that the "Create Instance" option is unchecked.
    Your completed BPEL process will look something like this:

    Read the article

  • Demantra USA Based Companies and SOX Compliance

    - by user702295
    A USA-based company is assessing Demantra Trade Promotion Management (TPM) capability. It appears that SOX compliance is necessary in their case due to the nature of what TPM does and the necessity for auditability. Do we have any detail on SOX compliance for Demantra?

    Answer
    SOX compliance with regards to IT:
    1. Requires auditing of data changes: done by who, what, when
       a. Audit trail profiles can be set up for key financial series, and you can view them in audit trail reports.
       b. One piece of functionality we do not have, which typically is asked for, is user login history. We have only active sessions; history is not available.
    2. Segregation of duties
       a. With respect to TPM, you could have the deduction and financial analyst for settlement be different from the promotion creator, promotion approver or sales team.
       b. The budget approver for funds can be different from the funds consumer.
       c. The promotion creator can be different from the promotion approver.
       d. For a US customer you may have to write some custom scripts to capture promotion status changes and produce an external report as part of compliance.
    One additional requirement is transparency of the forward commitments entered into with retailers / distributors for trade spending and promotions. Outside of Demantra - Consumer Goods Trade Funds Analytics.

    Read the article

  • Internet suddenly stopped working

    - by user95629
    I was using NoMachine to access Ubuntu 11.10 when suddenly the internet on Ubuntu stopped working. Now Ubuntu doesn't recognize the internet at all (including locally). Symptoms:
    - Doesn't recognize the fact that there is an ethernet cable attached. (I've tried switching cables, didn't work.)
    - GUI says "No network devices available"
    - Can't ping external IP addresses.
    - Tried changing /etc/network/interfaces to eth1 instead of eth0 but it didn't work.
    - ifconfig provides no output relating to eth0.
    - When I type "/etc/init.d/networking restart" it tells me "Cannot find device 'eth0'/Failed to bring up eth0."
    The network settings had used a static IP address prior to the crash. I don't know why it won't recognize the ethernet (I think this is the fundamental problem, but I might be wrong) and why it stopped working so suddenly--I haven't installed any updates lately. Any ideas? Any more information I should provide?

    EDIT: lspci recognizes the Ethernet controller; netstat does not.

    Read the article

  • Lazy coding is fun

    - by Anthony Trudeau
    Every once in a while I get the opportunity to write an application that is important enough to do, but not important enough to do the right way -- meaning standards, best practices, good architecture, et al. I call it lazy coding. The industry calls it RAD (rapid application development).

    I started on the conversion tool at the end of last week. It will convert our legacy data to a completely new system which I'm working on piece by piece. It will be used in the future, but only the new parts, because it'll only be necessary to convert the individual pieces of the data once. It was the perfect opportunity to just whip something together, but it was still functional, unlike a prototype or proof of concept. Although I would never write an application like this for a customer (internal or external), this methodology (if you can call it that) works great for something like this.

    I wouldn't be surprised if I get flamed for equating RAD to lazy coding or to lacking standards, best practices, or good architecture. Unfortunately, it fits the current usage. Although it's possible to create a good, maintainable application using the RAD methodology, it's just too ripe for abuse and requires too much discipline for an individual, let alone a team, to do right. Sometimes it's just fun to throw caution to the wind and start slamming code.

    Read the article
