Search Results

Search found 52375 results on 2095 pages for 'process id'.


  • How to remove HDD Low virus

    - by samsudeen
    “HDD Low virus” is a new fake system optimizer application which started affecting Windows (XP, Vista, Windows 7) based computers worldwide starting on Monday. It gets installed on computers without notice, bypassing antivirus software. Infected computers suddenly pop up a system error and try to shut down. Though the major antivirus companies have not yet released an update for this virus, we can easily remove it using the steps below. Steps to remove HDD Low virus: Press Ctrl+Alt+Delete and go to Task Manager -> Processes, then kill the process named [random number].exe (e.g. 123410.exe). Go to Run -> type msconfig to launch the System Configuration utility. In the Startup tab, uncheck all the entries with random names (e.g. jygkgs.exe) and note the folder path of each entry in the Command column. Go to that folder path and delete all the .exe files with random names manually (it is recommended to use the command prompt to delete the files; see the sketch below). Delete all the HDD Low files in the following paths: %Desktop%\HDD Low.lnk %Programs%\HDD Low\Uninstall HDD Low.lnk %Programs%\HDD Low\HDD Low.lnk Open the registry using Run -> regedit.exe, search for the key below and delete it: Software\Microsoft\Windows\CurrentVersion\Run "[random number].exe" Restart the computer. Also update your antivirus definitions and run a full scan of your computer to remove any affected files. This article, titled How to remove HDD Low virus, was originally published at Tech Dreams.
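
    A sketch of the command-prompt clean-up the steps above describe. The process name, file path and registry hive are examples only (the malware uses a random name, and the startup entry may live under HKCU or HKLM), so substitute what you actually see on the infected machine:

        :: kill the fake optimizer process (random name, e.g. 123410.exe)
        taskkill /f /im 123410.exe

        :: delete the dropped executable, using the folder path noted from the msconfig Command column
        del /f "C:\path\noted\from\msconfig\123410.exe"

        :: remove the startup entry that relaunches it (check both HKCU and HKLM)
        reg delete "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" /v 123410.exe /f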

    Read the article

  • How to use gps receiver bu-353

    - by Parimal
    Hi, I have a GPS receiver (BU-353) with a USB interface and I want to know how I can use it under Ubuntu. I ran the following command: gpsd -n -N -D 2 /dev/ttyUSB0 and got the following output: gpsd: launching (Version 2.94) gpsd: listening on port gpsd gpsd: running with effective group ID 1000 gpsd: running with effective user ID 1000 gpsd: opening GPS data source type 3 at '/dev/ttyUSB0' gpsd: speed 38400, 8N1 gpsd: Garmin: garmin_gps Linux USB module not active. gpsd: speed 9600, 8O1 gpsd: speed 38400, 8N1 gpsd: gpsd_activate(): opened GPS (fd 6) gpsd: speed 4800, 8N1 gpsd: NTPD ntpd_link_activate: 0 gpsd: /dev/ttyUSB0 identified as type SiRF binary (2.687608 sec @ 4800bps) gpsd: detaching 127.0.0.1 (sub 1, fd 8) in detach_client gpsd: detaching 127.0.0.1 (sub 1, fd 8) in detach_client After this I started tangoGPS, which said no GPS and no gpsd found.
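
    Since gpsd does identify the receiver (SiRF binary at 4800 bps), it can help to confirm that gpsd is actually serving fixes before blaming the client. A quick sketch, assuming the gpsd-clients package is installed:

        # with gpsd already running as above, watch decoded fixes in a second terminal
        cgps -s

        # or use the simple graphical client
        xgps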

    Read the article

  • How to check a digital certificate?

    - by StackedCrooked
    I have extracted a certificate from a cable modem. Now I want to verify if this certificate is valid. If I understand correctly, the verification process consists of having the issuer sign the subject's public key and then comparing the result with the subject's signature. This signing process is done using the issuer's private key, which nobody but the issuer has access to. So even if I have both certificates on my PC, there is no way for me to verify the subject's validity. From this I can only conclude that the verification must be implemented as a remote service. The problem is that I don't know what remote service I need to access to verify this certificate. The issuer is "AVM GmbH Cable Modem Root Certificate Authority". How can I find the webservice for verification? Is there a standard lookup mechanism for this?
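
    For what it's worth, checking the signature does not require the issuer's private key, only the issuer's public key, which is contained in the issuer's certificate, so the check can be done locally once both certificates are at hand. A sketch with OpenSSL, assuming both certificates have been exported to PEM files (the file names are placeholders):

        # verify that the modem certificate chains up to the AVM root certificate
        openssl verify -CAfile avm_root_ca.pem cable_modem_cert.pem

        # inspect issuer, subject and validity period of the extracted certificate
        openssl x509 -in cable_modem_cert.pem -noout -issuer -subject -dates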

    Read the article

  • The ugly evolution of running a background operation in the context of an ASP.NET app

    - by Jeff
    If you’re one of the two people who has followed my blog for many years, you know that I’ve been going at POP Forums now for over almost 15 years. Publishing it as an open source app has been a big help because it helps me understand how people want to use it, and having it translated to six languages is pretty sweet. Despite this warm and fuzzy group hug, there has been an ugly hack hiding in there for years. One of the things we find ourselves wanting to do is hide some kind of regular process inside of an ASP.NET application that runs periodically. The motivation for this has always been that a lot of people simply don’t have a choice, because they’re running the app on shared hosting, or don’t otherwise have access to a box that can run some kind of regular background service. In POP Forums, I “solved” this problem years ago by hiding some static timers in an HttpModule. Truthfully, this works well as long as you don’t run multiple instances of the app, which in the cloud world, is always a possibility. With the arrival of WebJobs in Azure, I’m going to solve this problem. This post isn’t about that. The other little hacky problem that I “solved” was spawning a background thread to queue emails to subscribed users of the forum. This evolved quite a bit over the years, starting with a long running page to mail users in real-time, when I had only a few hundred. By the time it got into the thousands, or tens of thousands, I needed a better way. What I did is launched a new thread that read all of the user data in, then wrote a queued email to the database (as in, the entire body of the email, every time), with the properly formatted opt-out link. It was super inefficient, but it worked. Then I moved my biggest site using it, CoasterBuzz, to an Azure Website, and it stopped working. So let’s start with the first stupid thing I was doing. The new thread was simply created with delegate code inline. As best I can tell, Azure Websites are more aggressive about garbage collection, because that thread didn’t queue even one message. When the calling server response went out of scope, so went the magic background thread. Duh, all I had to do was move the thread to a private static variable in the class. That’s the way I was able to keep stuff running from the HttpModule. (And yes, I know this is still prone to failure, particularly if the app recycles. For as infrequently as it’s used, I have not, however, experienced this.) It was still failing, but this time I wasn’t sure why. It would queue a few dozen messages, then die. Running in Azure, I had to turn on the application logging and FTP in to see what was going on. That led me to a helper method I was using as delegate to build the unsubscribe links. The idea here is that I didn’t want yet another config entry to describe the base URL, appended with the right path that would match the routing table. No, I wanted the app to figure it out for you, so I came up with this little thing: public static string FullUrlHelper(this Controller controller, string actionName, string controllerName, object routeValues = null) { var helper = new UrlHelper(controller.Request.RequestContext); var requestUrl = controller.Request.Url; if (requestUrl == null) return String.Empty; var url = requestUrl.Scheme + "://"; url += requestUrl.Host; url += (requestUrl.Port != 80 ? ":" + requestUrl.Port : ""); url += helper.Action(actionName, controllerName, routeValues); return url; } And yes, that should have been done with a string builder. 
This is useful for sending out the email verification messages, too. As clever as I thought I was with this, I was using a delegate in the admin controller to format these unsubscribe links for tens of thousands of users. I passed that delegate into a service class that did the email work: Func<User, string> unsubscribeLinkGenerator = user => this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = user.UserID, key = _profileService.GetUnsubscribeHash(user) }); _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator); Cool, right? Actually, not so much. If you look back at the helper, this delegate then will depend on the controller context to learn the routing and format for the URL. As you might have guessed, those things were turning null after a few dozen formatted links, when the original request to the admin controller went away. That this wasn’t already happening on my dedicated server is surprising, but again, I understand why the Azure environment might be eager to reclaim a thread after servicing the request. It’s already inefficient that I’m building the entire email for every user, but going back to check the routing table for the right link every time isn’t a win either. I put together a little hack to look up one generic URL, and use that as the basis for a string format. If you’re wondering why I didn’t just use the curly braces up front, it’s because they get URL formatted: var baseString = this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = "--id--", key = "--key--" }); baseString = baseString.Replace("--id--", "{0}").Replace("--key--", "{1}"); Func unsubscribeLinkGenerator = user => String.Format(baseString, user.UserID, _profileService.GetUnsubscribeHash(user)); _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator); And wouldn’t you know it, the new solution works just fine. It’s still kind of hacky and inefficient, but it will work until this somehow breaks too.
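
    For readers who want to see the shape of the "private static" fix mentioned above, here is a minimal sketch; the class, field and method names are made up, and the real implementation lives in the POP Forums source:

        using System;
        using System.Threading;

        // stand-in for the real POP Forums user entity
        public class User { public int UserID { get; set; } }

        public class MailingListService
        {
            // Rooting the worker thread in a static field keeps it alive for the life of the
            // AppDomain instead of letting it die when the originating request goes out of scope.
            private static Thread _mailWorker;

            public void MailUsers(string subject, string body, string htmlBody,
                                  Func<User, string> unsubscribeLinkGenerator)
            {
                _mailWorker = new Thread(() =>
                {
                    // read the subscribed users, build each queued e-mail from the
                    // precomputed link template, and write it to the database
                });
                _mailWorker.IsBackground = true;
                _mailWorker.Start();
            }
        }

    As noted in the post, this is still vulnerable to an app recycle; it only keeps the garbage collector from reclaiming the thread while the AppDomain is alive.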

    Read the article

  • How to use dd to make split ISO images from a storage device?

    - by Gustavo Bandeira
    This is a double question, I just hope it's valid. I need to know how to use dd to make split ISO images from a storage device. I'm doing it through SSH: the process is slow and the risk of failing in the middle of the operation (1) is high, so I need to know how to make these split ISO images from my storage device. (2) I'm also searching for some reference on dd; it could be a book or a good website about it, for when any doubt arises. 1 - I'm doing it on a ~60GB storage device; it took me a whole day to copy ~10GB from this disk. 2 - For the curious, I'm trying to recover an accidentally deleted file from an iPod. So far I've been able to carry out the whole process, I just need to improve it because I left it copying the disk yesterday and today it gave me an error when it was at ~10GB.
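
    A sketch of one common way to image the device in chunks that survive an interrupted SSH session; the device name, chunk size and output prefix are placeholders, and running it inside screen or tmux keeps a dropped connection from killing the copy:

        # read the whole device, compress the stream, and split it into 2 GB pieces
        sudo dd if=/dev/sdX bs=4M conv=noerror,sync | gzip -c | split -b 2G - ipod-image.gz.

        # later, reassemble and decompress the image
        cat ipod-image.gz.* | gunzip -c > ipod-image.img

    For reference material, the dd and split man pages (man dd, man split) cover every option used here.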

    Read the article

  • JBoss Seam project can not be run/deployed

    - by user1494328
    I created sample application in Seam framework (Seam Web Project) and JBoss Server 7.1. When I try run application, console dislays: 23:29:35,419 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-3) MSC00001: Failed to start service jboss.deployment.unit."secoundProject-ds.xml".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."secoundProject-ds.xml".PARSE: Failed to process phase PARSE of deployment "secoundProject-ds.xml" at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:119) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_24] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_24] at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_24] Caused by: org.jboss.as.server.deployment.DeploymentUnitProcessingException: IJ010061: Unexpected element: local-tx-datasource at org.jboss.as.connector.deployers.processors.DsXmlDeploymentParsingProcessor.deploy(DsXmlDeploymentParsingProcessor.java:85) at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:113) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] ... 5 more Caused by: org.jboss.jca.common.metadata.ParserException: IJ010061: Unexpected element: local-tx-datasource at org.jboss.jca.common.metadata.ds.DsParser.parseDataSources(DsParser.java:183) at org.jboss.jca.common.metadata.ds.DsParser.parse(DsParser.java:119) at org.jboss.jca.common.metadata.ds.DsParser.parse(DsParser.java:82) at org.jboss.as.connector.deployers.processors.DsXmlDeploymentParsingProcessor.deploy(DsXmlDeploymentParsingProcessor.java:80) ... 
6 more 23:29:35,452 INFO [org.jboss.as.server.deployment] (MSC service thread 1-4) JBAS015877: Stopped deployment secoundProject-ds.xml in 1ms 23:29:35,455 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS015863: Replacement of deployment "secoundProject-ds.xml" by deployment "secoundProject-ds.xml" was rolled back with failure message {"JBAS014671: Failed services" => {"jboss.deployment.unit.\"secoundProject-ds.xml\".PARSE" => "org.jboss.msc.service.StartException in service jboss.deployment.unit.\"secoundProject-ds.xml\".PARSE: Failed to process phase PARSE of deployment \"secoundProject-ds.xml\""}} 23:29:35,457 INFO [org.jboss.as.server.deployment] (MSC service thread 1-1) JBAS015876: Starting deployment of "secoundProject-ds.xml" 23:29:35,920 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-1) MSC00001: Failed to start service jboss.deployment.unit."secoundProject-ds.xml".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."secoundProject-ds.xml".PARSE: Failed to process phase PARSE of deployment "secoundProject-ds.xml" at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:119) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_24] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_24] at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_24] Caused by: org.jboss.as.server.deployment.DeploymentUnitProcessingException: IJ010061: Unexpected element: local-tx-datasource at org.jboss.as.connector.deployers.processors.DsXmlDeploymentParsingProcessor.deploy(DsXmlDeploymentParsingProcessor.java:85) at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:113) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] ... 5 more Caused by: org.jboss.jca.common.metadata.ParserException: IJ010061: Unexpected element: local-tx-datasource at org.jboss.jca.common.metadata.ds.DsParser.parseDataSources(DsParser.java:183) at org.jboss.jca.common.metadata.ds.DsParser.parse(DsParser.java:119) at org.jboss.jca.common.metadata.ds.DsParser.parse(DsParser.java:82) at org.jboss.as.connector.deployers.processors.DsXmlDeploymentParsingProcessor.deploy(DsXmlDeploymentParsingProcessor.java:80) ... 
6 more 23:29:35,952 INFO [org.jboss.as.controller] (DeploymentScanner-threads - 2) JBAS014774: Service status report JBAS014777: Services which failed to start: service jboss.deployment.unit."secoundProject-ds.xml".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."secoundProject-ds.xml".PARSE: Failed to process phase PARSE of deployment "secoundProject-ds.xml" My secoundProject-ds.xml: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE datasources PUBLIC "-//JBoss//DTD JBOSS JCA Config 1.5//EN" "http://www.jboss.org/j2ee/dtd/jboss-ds_1_5.dtd"> <datasources> <local-tx-datasource> <jndi-name>secoundProjectDatasource</jndi-name> <use-java-context>true</use-java-context> <connection-url>jdbc:mysql://localhost:3306/database</connection-url> <driver-class>com.mysql.jdbc.Driver</driver-class> <user-name>root</user-name> <password></password> </local-tx-datasource> </datasources> When I comment tags errors disappear, but application is disabled in browser (The requested resource (/secoundProject/) is not available.). What should I do to fix this problem?
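
    The parser error points at local-tx-datasource, which belongs to the old JBoss 4.x/5.x *-ds.xml DTD; JBoss AS 7.x only understands the newer datasources schema in -ds.xml deployments. As a rough sketch (not the project's actual file), the same datasource might look like this in the 7.1 format; the JNDI name must start with java:/ or java:jboss/, and the driver element has to name a JDBC driver that is deployed or defined in standalone.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <datasources xmlns="http://www.jboss.org/ironjacamar/schema">
            <datasource jndi-name="java:jboss/datasources/secoundProjectDatasource"
                        pool-name="secoundProjectPool" enabled="true" use-java-context="true">
                <connection-url>jdbc:mysql://localhost:3306/database</connection-url>
                <driver>mysql-connector-java.jar</driver>
                <security>
                    <user-name>root</user-name>
                    <password></password>
                </security>
            </datasource>
        </datasources>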

    Read the article

  • mount network drive

    - by CaptnLenz
    Since I updated my Ubuntu to Natty Narwhal (from 10.04), my mount script doesn't work anymore. The script mounts a folder from a NAS (WD MyBookWorld) on the local network to a folder in my home folder. The script looked like this: #!/bin/bash sudo mount //192.168.2.222/Public/Shared\ Music/ /home/simon/Musik/ error: mount: wrong fs type, bad option, bad superblock on //192.168.2.222/Public/Shared Music/, missing codepage or helper program, or other error (for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program) Sometimes the syslog provides valuable information – try dmesg | tail or similar Now, because the script doesn't work anymore, I decided to add the mount to my fstab, because the network drive should be mounted on every startup. My fstab entry looks like this: //192.168.2.222/Public/Shared\ Music/ /home/simon/Musik cifs credentials=/home/simon/.smbcredentials 0 0 But that doesn't work either. I get a message during the startup process that Musik couldn't be mounted. Are there any log files I can check for errors? The system is a freshly installed 11.04. Greetings
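
    Two things commonly cause this on a fresh Natty install, so the sketch below rests on both assumptions: the space in "Shared Music" has to be escaped as \040 inside /etc/fstab (the shell-style backslash escape is not interpreted there), and mount.cifs has to be present, since it is provided by a separate package rather than installed by default:

        sudo apt-get install cifs-utils    # on older releases the package may still be called smbfs

        # /etc/fstab entry; note the \040 escape for the space and the uid/gid options
        # so the mounted files belong to the desktop user (uid=simon is an assumption here)
        //192.168.2.222/Public/Shared\040Music /home/simon/Musik cifs credentials=/home/simon/.smbcredentials,uid=simon,gid=simon,iocharset=utf8 0 0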

    Read the article

  • New Bundling and Minification Support (ASP.NET 4.5 Series)

    - by ScottGu
    This is the sixth in a series of blog posts I'm doing on ASP.NET 4.5. The next release of .NET and Visual Studio include a ton of great new features and capabilities.  With ASP.NET 4.5 you'll see a bunch of really nice improvements with both Web Forms and MVC - as well as in the core ASP.NET base foundation that both are built upon. Today’s post covers some of the work we are doing to add built-in support for bundling and minification into ASP.NET - which makes it easy to improve the performance of applications.  This feature can be used by all ASP.NET applications, including both ASP.NET MVC and ASP.NET Web Forms solutions. Basics of Bundling and Minification As more and more people use mobile devices to surf the web, it is becoming increasingly important that the websites and apps we build perform well with them. We’ve all tried loading sites on our smartphones – only to eventually give up in frustration as it loads slowly over a slow cellular network.  If your site/app loads slowly like that, you are likely losing potential customers because of bad performance.  Even with powerful desktop machines, the load time of your site and perceived performance can make an enormous customer perception. Most websites today are made up of multiple JavaScript and CSS files to separate the concerns and keep the code base tight. While this is a good practice from a coding point of view, it often has some unfortunate consequences for the overall performance of the website.  Multiple JavaScript and CSS files require multiple HTTP requests from a browser – which in turn can slow down the performance load time.  Simple Example Below I’ve opened a local website in IE9 and recorded the network traffic using IE’s built-in F12 developer tools. As shown below, the website consists of 5 CSS and 4 JavaScript files which the browser has to download. Each file is currently requested separately by the browser and returned by the server, and the process can take a significant amount of time proportional to the number of files in question. Bundling ASP.NET is adding a feature that makes it easy to “bundle” or “combine” multiple CSS and JavaScript files into fewer HTTP requests. This causes the browser to request a lot fewer files and in turn reduces the time it takes to fetch them.   Below is an updated version of the above sample that takes advantage of this new bundling functionality (making only one request for the JavaScript and one request for the CSS): The browser now has to send fewer requests to the server. The content of the individual files have been bundled/combined into the same response, but the content of the files remains the same - so the overall file size is exactly the same as before the bundling.   But notice how even on a local dev machine (where the network latency between the browser and server is minimal), the act of bundling the CSS and JavaScript files together still manages to reduce the overall page load time by almost 20%.  Over a slow network the performance improvement would be even better. Minification The next release of ASP.NET is also adding a new feature that makes it easy to reduce or “minify” the download size of the content as well.  This is a process that removes whitespace, comments and other unneeded characters from both CSS and JavaScript. The result is smaller files, which will download and load in a browser faster.  
The graph below shows the performance gain we are seeing when both bundling and minification are used together: Even on my local dev box (where the network latency is minimal), we now have a 40% performance improvement from where we originally started.  On slow networks (and especially with international customers), the gains would be even more significant. Using Bundling and Minification inside ASP.NET The upcoming release of ASP.NET makes it really easy to take advantage of bundling and minification within projects and see performance gains like in the scenario above. The way it does this allows you to avoid having to run custom tools as part of your build process –  instead ASP.NET has added runtime support to perform the bundling/minification for you dynamically (caching the results to make sure perf is great).  This enables a really clean development experience and makes it super easy to start to take advantage of these new features. Let’s assume that we have a simple project that has 4 JavaScript files and 6 CSS files: Bundling and Minifying the .css files Let’s say you wanted to reference all of the stylesheets in the “Styles” folder above on a page.  Today you’d have to add multiple CSS references to get all of them – which would translate into 6 separate HTTP requests: The new bundling/minification feature now allows you to instead bundle and minify all of the .css files in the Styles folder – simply by sending a URL request to the folder (in this case “styles”) with an appended “/css” path after it.  For example:    This will cause ASP.NET to scan the directory, bundle and minify the .css files within it, and send back a single HTTP response with all of the CSS content to the browser.  You don’t need to run any tools or pre-processor to get this behavior.  This enables you to cleanly separate your CSS into separate logical .css files and maintain a very clean development experience – while not taking a performance hit at runtime for doing so.  The Visual Studio designer will also honor the new bundling/minification logic as well – so you’ll still get a WYSWIYG designer experience inside VS as well. Bundling and Minifying the JavaScript files Like the CSS approach above, if we wanted to bundle and minify all of our JavaScript into a single response we could send a URL request to the folder (in this case “scripts”) with an appended “/js” path after it:   This will cause ASP.NET to scan the directory, bundle and minify the .js files within it, and send back a single HTTP response with all of the JavaScript content to the browser.  Again – no custom tools or builds steps were required in order to get this behavior.  And it works with all browsers. Ordering of Files within a Bundle By default, when files are bundled by ASP.NET they are sorted alphabetically first, just like they are shown in Solution Explorer. Then they are automatically shifted around so that known libraries and their custom extensions such as jQuery, MooTools and Dojo are loaded before anything else. So the default order for the merged bundling of the Scripts folder as shown above will be: Jquery-1.6.2.js Jquery-ui.js Jquery.tools.js a.js By default, CSS files are also sorted alphabetically and then shifted around so that reset.css and normalize.css (if they are there) will go before any other file. 
So the default sorting of the bundling of the Styles folder as shown above will be: reset.css content.css forms.css globals.css menu.css styles.css The sorting is fully customizable, though, and can easily be changed to accommodate most use cases and any common naming pattern you prefer.  The goal with the out of the box experience, though, is to have smart defaults that you can just use and be successful with. Any number of directories/sub-directories supported In the example above we just had a single “Scripts” and “Styles” folder for our application.  This works for some application types (e.g. single page applications).  Often, though, you’ll want to have multiple CSS/JS bundles within your application – for example: a “common” bundle that has core JS and CSS files that all pages use, and then page specific or section specific files that are not used globally. You can use the bundling/minification support across any number of directories or sub-directories in your project – this makes it easy to structure your code so as to maximize the bunding/minification benefits.  Each directory by default can be accessed as a separate URL addressable bundle.  Bundling/Minification Extensibility ASP.NET’s bundling and minification support is built with extensibility in mind and every part of the process can be extended or replaced. Custom Rules In addition to enabling the out of the box - directory-based - bundling approach, ASP.NET also supports the ability to register custom bundles using a new programmatic API we are exposing.  The below code demonstrates how you can register a “customscript” bundle using code within an application’s Global.asax class.  The API allows you to add/remove/filter files that go into the bundle on a very granular level:     The above custom bundle can then be referenced anywhere within the application using the below <script> reference:     Custom Processing You can also override the default CSS and JavaScript bundles to support your own custom processing of the bundled files (for example: custom minification rules, support for Saas, LESS or Coffeescript syntax, etc). In the example below we are indicating that we want to replace the built-in minification transforms with a custom MyJsTransform and MyCssTransform class. They both subclass the CSS and JavaScript minifier respectively and can add extra functionality:     The end result of this extensibility is that you can plug-into the bundling/minification logic at a deep level and do some pretty cool things with it. 2 Minute Video of Bundling and Minification in Action Mads Kristensen has a great 90 second video that shows off using the new Bundling and Minification feature.  You can watch the 90 second video here. Summary The new bundling and minification support within the next release of ASP.NET will make it easier to build fast web applications.  It is really easy to use, and doesn’t require major changes to your existing dev workflow.  It is also supports a rich extensibility API that enables you to customize it however you want. You can easily take advantage of this new support within ASP.NET MVC, ASP.NET Web Forms and ASP.NET Web Pages based applications. Hope this helps, Scott P.S. In addition to blogging, I use Twitter to-do quick posts and share links. My Twitter handle is: @scottgu
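
    The custom-bundle registration code in the post appears as screenshots that are not reproduced here. As a rough sketch of the idea, this is roughly what registering a "customscript" bundle looks like with the System.Web.Optimization API as it later shipped; the bundle name and file list are examples, and the exact class names changed between the early previews and the final release:

        using System.Web.Optimization;

        public class Global : System.Web.HttpApplication
        {
            protected void Application_Start()
            {
                // a named bundle containing a hand-picked, explicitly ordered set of scripts
                var customScripts = new ScriptBundle("~/bundles/customscript")
                    .Include("~/Scripts/jquery-1.6.2.js",
                             "~/Scripts/jquery.tools.js",
                             "~/Scripts/a.js");

                BundleTable.Bundles.Add(customScripts);
            }
        }

    The page then references the bundle with a single script tag pointing at ~/bundles/customscript (or, in later versions, via the Scripts.Render helper).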

    Read the article

  • ODEE Green Field (Windows) Part 5 - Deployment and Validation

    - by AndyL-Oracle
    And here we are, almost finished with our installation of Oracle Documaker Enterprise Edition ("ODEE") in a Windows green field environment. Let's recap what we've done so far: In part 1, I went over the basic process that I intended to show with installing an ODEE on a green field server. I walked you through the basic installation of Oracle 11g database In part 2, I covered the installation of WebLogic application server. In part 3, I showed you how to install SOA Suite for WebLogic. In part 4, we did the first part of the installation of ODEE itself. What remains after all of that, is the deployment of the ODEE components onto the database and application server - so let's get to it! DATABASE First, we'll deploy the schemas to the database. The schemas are created during the ODEE installation according to the responses provided during the install process. To deploy the schemas, you'll need to login to the database server in your green field environment. Open a command line and CD into ODEE_HOME\documaker\database\oracle11g.Run SQLPLUS as SYSDBA and execute dmkr_admin.sql:  sqlplus / as sysdba @dmkr_admin.sql Execute dmkr_asline.sql, dmkr_admin_correspondence_example.sql.  If you require additional languages, run the appropriate SQL scripts (e.g. dmkr_asline_es.sql for Spanish). APPLICATION SERVER Next, we'll deploy the WebLogic domain and it's components - Documaker web services, Documaker Interactive, Documaker dashboard, and more. To deploy the components, you'll need to login to the application server in your green field environment. 1. Open Windows Explorer and navigate to ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts.2. Using a text editor such as Notepad++, modify weblogic_installation_properties and set location of MIDDLEWARE_HOME and ODEE HOME. If you have used the defaults you’ll probably need to change the E: to C: and that’s it. Save the changes.3. Continuing in the same directory, use your text editor to modify set_middleware_env.cmd and set the drive and path to MIDDLEWARE_HOME. If you have used the defaults you’ll probably need to just change E: to C: and that’s it. Save the changes.4. In the same directory, execute wls_create_domain.cmd by double-clicking it. This should run to completion. If it does not, review any errors and correct them, and rerun the script.5. In the same directory, execute wls_add_correspondence.cmd by double-clicking it - again this should run to completion. 6. Next, we'll start the AdminServer - this is the main WebLogic domain server. To start it, use Windows Explorer and navigate to MIDDLEWARE_HOME\user_projects\domains\idocumaker_domain. Double-click startWebLogic.cmd and the server startup will begin. Once you see output that indicates that the server status changed to RUNNING you may proceed.  a. Note: if you saw database connection errors, you probably didn’t make sure your database name and connection type match. You can change this manually in the WebLogic Console. Open a browser and navigate to http://localhost:7001/console (replace localhost with the name of your application server host if you aren't opening the browser on the server), and login with the the weblogic credential you provided in the ODEE installation process. b. Once you're logged in, open Services?Data Sources. Select dmkr_admin and click Connection Pool.  c. The end of the URL should match the connection type you chose. 
If you chose ServiceName, the URL should be: jdbc:oracle:thin:@//<hostname>:1521/<serviceName> and if you chose SID, the URL should be: jdbc:oracle:thin:@//<hostname>:1521/<SIDname> d. An example serviceName is a fully qualified DNS-style name, e.g. "idmaker.us.oracle.com". (It does not need to actually resolve in DNS). An example SID is just a name, e.g. IDMAKER. e. Save the change and repeat for the data source dmkr_asline.  f. You will also need to make the same changes in the ODEE_HOME/documaker/docfactory/config/context/.bindings file - open the file in a text editor, locate the URL lines and make the appropriate change, then save the file.  7. Back in the ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts directory, execute create_users_groups.cmd. 8. In the same directory, execute create_users_groups_correspondence_example.cmd. 9. Open a browser and navigate to http://localhost:7001/jpsquery. Replace localhost with the name of your application server host if you aren't running the browser on the application server. If you changed the default port for the AdminServer from 7001, use the port you changed it to. You should see output like this: 10. Start the WebLogic managed servers by opening a command prompt and navigating to MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/. When you start the servers listed below, you will be prompted to enter the WebLogic credentials to start the server. You can prevent this by providing the credential in the startManagedwebLogic.cmd file for the WLS_USER and WLS_PASS values. Note that the credential will be stored in cleartext. To start the server, type in the command shown. a. Start the JMS Server: ./startManagedWebLogic.cmd jms_server b. Start Dashboard/Documaker Administrator: ./startManagedWebLogic.cmd dmkr_server c. Start Documaker Interactive for Correspondence: ./startManagedWebLogic.cmd idm_server SOA Composites  If you're planning on testing out the approval process components of BPEL that can be used with Documaker Interactive, then use the following steps to deploy the SOA composites. If you're not going to use BPEL, you can skip to the next section.1. Stop the servers listed in the previous section (Step 10) in the reverse order that they were started.2. Run the Domain configuration command: navigate to and execute MIDDLEWARE_HOME/wlserver_10.3/common/bin/config.cmd.3. Select Extend and click next. 4. Select the iDocumaker Domain and click Next. 5. Select the Oracle SOA Suite – 11.1.1.0 (this may automatically select other components which is OK). Click Next. 6. View the Configure JDBC resources screen. You should not make any changes. Click Next. 7. Check both connections and click Test Connections. After successful test, click Next. If the tests fail, something is broken. Go back to configure JDBC resources and check your service name/SID. 8. Check all schemas. Set a password (will be the same for all schemas). Enter the database information (service name, host name, port). Click Next. 9. Connections should test successfully. If not, go back and fix any errors. Click Next. 10. Click Next to pass through Optional Configuration. 11. Click Extend. 12. Click Done. 13. Open a terminal window and navigate to/execute: ODEE_HOME/documaker/j2ee/weblogic/oracle11g/bpel/antbuild.cmd14. Start the WebLogic Servers – AdminServer, jms_server, dmkr_server, idm_server. If you forgot how to do this, see the previous section Step 10. 
Note: if you previously changed the startManagedWebLogic.cmd script for WLS_USER and WLS_PASS you will need to make those changes again. 15. Start the WebLogic server soa_server1: MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/startManagedWebLogic.cmd soa_server116. Open a browser to http://localhost:7001/console and login. 17. Navigate to Services?Data Sources and select DMKR_ASLINE. 18. Click the Targets tab. Check soa_server1, then click Save. Repeat for the DMKR_ADMIN data source. 19. Open a command prompt and navigate to ODEE_HOME/j2ee/weblogic/oracle11g/scripts, then execute deploy_soa.cmd. That's it! (As if that wasn't enough?) DOCUMAKER Deploy the sample MRL resources by navigating to/executing ODEE_HOME/documaker/mstrres/dmres/deploysamplemrl.bat. You should see approximately 500 resources deployed into the database. Start the Factory Services. Start?Run?services.msc. Locate the service named "ODDF xxxx" and right-click, select Start. Note that each Assembly Line has a separate Factory setup, including its own Factory service and Docupresentment service. The services are named for the assembly line and the machine on which they are installed (because you could have multiple machines servicing a single assembly line, so this allows for easy scripting to control all the services if you choose to do so. Repeat for the Docupresentment service. Note that each Assembly Line has a separate Docupresentment. Using Windows Explorer, navigate to ODEE_HOME/documaker/mstrres/dmres/input and select one of the XML files, and copy it into ODEE_HOME/documaker/hotdirectory. Note: if you chose a different hot directory during installation, copy the file there instead. Momentarily you should see the XML file disappear! Open browser and navigate to http://localhost:10001/DocumakerDashboard (previous versions 12.0-12.2 use http://localhost:10001/dashboard) and verify that job processed successfully. Note that some transactions may fail if you do not have a properly configured email server, and this is ok. You can set up a simple SMTP server (just search the internet for "SMTP developer" and you'll get several to choose from.  So... that's it? Where are we at this point? You now have a completely functional ODEE installation, from soup to nuts as they say. You can further expand your installation by doing some of the following activities: clustering WebLogic services configuring WebLogic for redundancy configuring Oracle 11g for RAC adding additional Factory servers for redundancy/processing capacity setting up a real MRL (instead of the sample resources) testing Documaker Web Services for job submission and more!  I certainly hope you've enjoyed this and find it useful. If you find yourself running into trouble, visit the Oracle Community for Documaker - there is plenty of activity there and you can ask questions. For more concentrated assistance, you can engage an Oracle consultant who is a subject matter expert to assist you. Feel free to email me [andy (dot) little (at) oracle (dot) com] and I can connect you with the appropriate resource to get started. Best of luck! -Andy 

    Read the article

  • Coldfusion 8 crash

    - by Denis Topa
    I have a very big problem with ColdFusion 8 on Windows Server 2008 with IIS7. This is a production server and sometimes the site is not available; I have to manually end the jrun.exe process from Task Manager and then the site is available again. I noticed that the jrun.exe process is using about 1.3 GB of memory at the time it crashes. It happens 2-3 times a day. I have looked in the ColdFusion logs and did not find anything strange besides some warnings that a job exceeded the 300-second execution time. I forgot to mention that ColdFusion is a 32-bit application but Windows is 64-bit; could this be the problem? I'm not that good with ColdFusion, so if someone knows how to troubleshoot this please let me know. Thanks!

    Read the article

  • Disaster Recovery Plan – Rebuild System Disk (Dell Server 2900 with PERC RAID controller)

    - by Jim Lahman
    Goal: Since the system disk is a RAID 1 mirrored set, we can rebuild the shadow set by replacing one of the good sets with a blank disk Steps Shutdown and power down server Remove the disk from bay 9, which is part of the system shadow set. Put this disk on the shelf Insert blank/old disk into the empty bay     Label the new disk before inserting it into the empty bay       Power up server During the booting process, the following message appears: “Some configured disks have been removed from your system…”       Press ‘C’ to Load Configuration utility             Press 'Y' to confirm to load the foreign configuration       In this example, the system shadow set is Disk Group 2.  (Before proceeding, confirm this is the disk group in your case).  Expanding the physical disks shows a disk in bay 8 and a missing disk in bay 9.  This is correct.   Now, we have to include the new inserted disk in this group       RAID controller reporting bay 9 is empty       There may be times when the new disk is seen as a foreign disk.  In this case, do the following:     Foreign disk is reported in bay 9 CTRL-N (Next Page) to Foreign Mgt All the disk groups will be displayed.  Typically, the disk group containing the foreign disk will be grey.  To remove the foreign disk Highlight Controller Press F2 Select Foreign Select Clear (do NOT import the configuration!)       Clear the foreign configuration Now the disk can be brought into the system shadow set disk group as a hot spare   To include the newly inserted disk into the system shadowset disk group, it must be brought in as a hot spare Highlight Disk Group 2 (VD Management) Hit F2 Select 'Manage Ded. HS'     Manage dedicated hot swap Select the disk in bay 9 (Hit space bar to select) Tab to 'OK'.  Hit the return key     Select hot spare to bring into RAID 1 mirror set   Rebuild automatically commences     Rebuild in process   Restart now or restart after rebuild is completed

    Read the article

  • How to use Ajax : Hovermenu Extender in ASP.NET

    - by SAMIR BHOGAYTA
    // This is a simple method; set any other properties you want as well. Step 1. Take the control that the extender is targeting. When the mouse cursor is over this control, the hover menu popup will be displayed. Step 2. Take one panel to display when the mouse is over the target control. Step 3. Set the following properties: TargetControlID = "ID of the control that the extender is targeting (the control you hover over)" PopupControlID = "ID of the panel to display when the mouse is over the target control" PopupPosition = Left (default), Right, Top, Bottom, Center.
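
    A minimal markup sketch of the above, assuming the AJAX Control Toolkit assembly is registered on the page with the ajaxToolkit tag prefix and a ScriptManager is present; the control IDs are placeholders:

        <asp:ScriptManager ID="ScriptManager1" runat="server" />

        <%-- Step 1: the control the extender targets (the one you hover over) --%>
        <asp:Label ID="lblTarget" runat="server" Text="Hover over me" />

        <%-- Step 2: the panel that pops up --%>
        <asp:Panel ID="pnlMenu" runat="server">
            Menu content goes here
        </asp:Panel>

        <%-- Step 3: wire them together --%>
        <ajaxToolkit:HoverMenuExtender ID="hmeDemo" runat="server"
            TargetControlID="lblTarget"
            PopupControlID="pnlMenu"
            PopupPosition="Bottom" />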

    Read the article

  • ASP.NET Web API and Simple Value Parameters from POSTed data

    - by Rick Strahl
    In testing out various features of Web API I've found a few oddities in the way that the serialization is handled. These are probably not super common but they may throw you for a loop. Here's what I found. Simple Parameters from Xml or JSON Content Web API makes it very easy to create action methods that accept parameters that are automatically parsed from XML or JSON request bodies. For example, you can send a JavaScript JSON object to the server and Web API happily deserializes it for you. This works just fine:public string ReturnAlbumInfo(Album album) { return album.AlbumName + " (" + album.YearReleased.ToString() + ")"; } However, if you have methods that accept simple parameter types like strings, dates, number etc., those methods don't receive their parameters from XML or JSON body by default and you may end up with failures. Take the following two very simple methods:public string ReturnString(string message) { return message; } public HttpResponseMessage ReturnDateTime(DateTime time) { return Request.CreateResponse<DateTime>(HttpStatusCode.OK, time); } The first one accepts a string and if called with a JSON string from the client like this:var client = new HttpClient(); var result = client.PostAsJsonAsync<string>(http://rasxps/AspNetWebApi/albums/rpc/ReturnString, "Hello World").Result; which results in a trace like this: POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1Content-Type: application/json; charset=utf-8Host: rasxpsContent-Length: 13Expect: 100-continueConnection: Keep-Alive "Hello World" produces… wait for it: null. Sending a date in the same fashion:var client = new HttpClient(); var result = client.PostAsJsonAsync<DateTime>(http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime, new DateTime(2012, 1, 1)).Result; results in this trace: POST http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime HTTP/1.1Content-Type: application/json; charset=utf-8Host: rasxpsContent-Length: 30Expect: 100-continueConnection: Keep-Alive "\/Date(1325412000000-1000)\/" (yes still the ugly MS AJAX date, yuk! This will supposedly change by RTM with Json.net used for client serialization) produces an error response: The parameters dictionary contains a null entry for parameter 'time' of non-nullable type 'System.DateTime' for method 'System.Net.Http.HttpResponseMessage ReturnDateTime(System.DateTime)' in 'AspNetWebApi.Controllers.AlbumApiController'. An optional parameter must be a reference type, a nullable type, or be declared as an optional parameter. Basically any simple parameters are not parsed properly resulting in null being sent to the method. For the string the call doesn't fail, but for the non-nullable date it produces an error because the method can't handle a null value. This behavior is a bit unexpected to say the least, but there's a simple solution to make this work using an explicit [FromBody] attribute:public string ReturnString([FromBody] string message) andpublic HttpResponseMessage ReturnDateTime([FromBody] DateTime time) which explicitly instructs Web API to read the value from the body. UrlEncoded Form Variable Parsing Another similar issue I ran into is with POST Form Variable binding. Web API can retrieve parameters from the QueryString and Route Values but it doesn't explicitly map parameters from POST values either. 
    Taking our same ReturnString function from earlier and posting a message POST variable like this:var formVars = new Dictionary<string,string>(); formVars.Add("message", "Some Value"); var content = new FormUrlEncodedContent(formVars); var client = new HttpClient(); var result = client.PostAsync(http://rasxps/AspNetWebApi/albums/rpc/ReturnString, content).Result; which produces this trace: POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1Content-Type: application/x-www-form-urlencodedHost: rasxpsContent-Length: 18Expect: 100-continue message=Some+Value When calling ReturnString:public string ReturnString(string message) { return message; } unfortunately it does not map the message value to the message parameter. This sort of mapping unfortunately is not available in Web API. Web API does support binding to form variables but only as part of model binding, which binds object properties to the POST variables. Sending the same message as in the previous example you can use the following code to pick up POST variable data:public string ReturnMessageModel(MessageModel model) { return model.Message; } public class MessageModel { public string Message { get; set; }} Note that the model is bound and the message form variable is mapped to the Message property as would other variables to properties if there were more. This works but it's not very dynamic. There's no real easy way to retrieve form variables (or query string values for that matter) in Web API's Request object as far as I can discern. Well only if you consider this easy:public string ReturnString() { var formData = Request.Content.ReadAsAsync<FormDataCollection>().Result; return formData.Get("message"); } Oddly FormDataCollection does not allow for indexers to work so you have to use the .Get() method which is rather odd. If you're running under IIS/Cassini you can always resort to the old and trusty HttpContext access for request data:public string ReturnString() { return HttpContext.Current.Request.Form["message"]; } which works fine and is easier. It's kind of a bummer that HttpRequestMessage doesn't expose some sort of raw Request object that has access to dynamic data - given that it's meant to serve as a generic REST/HTTP API that seems like a crucial missing piece. I don't see any way to read query string values either. To me personally HttpContext works, since I don't see myself using self-hosted code much. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api

    Read the article

  • How to Run Apache Commands From Oracle HTTP Server 11g Home

    - by Daniel Mortimer
    Every now and then you come across a problem when there is nothing in the "troubleshooting manual" which can help you. Instead you need to think outside the box. This happened to me two or three years back. Oracle HTTP Server (OHS) 11g did not start. The error reported back by OPMN was generic and gave no clue, and worse the HTTP Server error log was empty, and remained so even after I had increased the OPMN and HTTP Server log levels. After checking configuration files, operating system resources, etc I was still no nearer the solution. And then the light bulb moment! OHS is based on Apache - what happens if I attempt to start HTTP Server using the native apache command. Trouble was the OHS 11g solution has its binaries and configuration files in separate "home" directories ORACLE_HOME contains the binaries ORACLE_INSTANCE contains the configuration files How to set the environment so that native apache commands run without error? Eventually, with help from a colleague, the knowledge articleHow to Start Oracle HTTP Server 11g Without Using opmnctl [ID 946532.1]was born! To be honest, I cannot remember the exact cause and solution to that OHS problem two or three years ago. But, I do remember that an attempt to start HTTP Server using the native apache command threw back an error to the console which led me to discover the culprit was some unusual filesystem fault.The other day, I was asked to review and publish a new knowledge article which described how to use the apache command to dump a list of static and shared loaded modules. This got me thinking that it was time [ID 946532.1] was given an update. The resultHow To Run Native Apache Commands in an Oracle HTTP Server 11g Environment [ID 946532.1] Highlights: Title change Improved environment setting scripts Interactive, should be no need to manually edit the scripts (although readers are welcome to do so) Automatically dump out some diagnostic information Inclusion of some links to other troubleshooting collateral To view the knowledge article you need a My Oracle Support login. For convenience, you can obtain the scripts via the links below.MS Windows:Wrapper cmd script - calls main cmd script [After download, remove the ".txt" file extension]Main cmd script - sets OHS 11g environment to run Apache commands [After download, remove the ".txt" file extension]Unix:Shell script - sets OHS 11g environment to run Apache commands on Unix Please note: I cannot guarantee that the scripts held in the blog repository will be maintained. Any enhancements or faults will applied to the scripts attached to the knowledge article. Lastly, to find out more about native apache commands, refer to the Apache Documentation apachectl - Apache HTTP Server Control Interface[http://httpd.apache.org/docs/2.2/programs/apachectl.html]httpd - Apache Hypertext Transfer Protocol Server[http://httpd.apache.org/docs/2.2/programs/httpd.html]
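
    For orientation, here is a very rough sketch of what those environment-setting scripts boil down to on Unix; the paths and the component name (ohs1) are assumptions about a default-style install, and the scripts attached to the knowledge article remain the maintained, authoritative versions:

        # assumed locations -- adjust to your own ORACLE_HOME / ORACLE_INSTANCE
        export ORACLE_HOME=/u01/app/oracle/middleware/Oracle_WT1
        export ORACLE_INSTANCE=/u01/app/oracle/middleware/asinst_1
        export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/ohs/lib:$LD_LIBRARY_PATH

        # run a native Apache command against the OHS component's configuration,
        # e.g. a configuration syntax check or a dump of loaded modules
        $ORACLE_HOME/ohs/bin/apachectl -t -f $ORACLE_INSTANCE/config/OHS/ohs1/httpd.conf
        $ORACLE_HOME/ohs/bin/httpd -M -f $ORACLE_INSTANCE/config/OHS/ohs1/httpd.conf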

    Read the article

  • Don’t miss the Oracle Webcast: Enabling Effective Decision Making with “One Source of the Truth” at BB&T

    - by Rob Reynolds
    Webcast Date:  September 17th, 2012  -  9 a.m. PT / 12 p.m. ET  BB&T Corporation (NYSE: BBT) is one of the largest financial services holding companies in the United States. One of their IT goals is to provide “one source of truth” to enable more effective decision making at the corporate and local level. By using Oracle’s Hyperion Enterprise Planning Suite and Oracle Essbase, BB&T streamlined their planning and financial reporting processes. Large volumes of data were consolidated into a single reporting solution giving stakeholders more timely and accurate information. By providing a central and automated collaboration tool, BB&T is able to prepare more accurate financial forecasts, rapidly consolidate large amounts of data, and make more informed decisions. Join us on September 17th for a live webcast to hear BB&T’s journey to achieve “One Source of Truth” and learn how Oracle’s Hyperion Planning Suite and Oracle’s Essbase allow you to: Adopt best practices like rolling forecasts and driver-based planning Reduce the time and effort dedicated to the annual budget process Remove forecasting uncertainty with predictive modeling capabilities Rapidly analyze shifting market conditions with a powerful calculation engine Prioritize resources effectively with complete visibility into all potential risks Link strategy and execution with integrated strategic, financial and operational planning Register here.

    Read the article

  • Using Microsoft's Chart Controls In An ASP.NET Application: Serializing Chart Data

    In most usage scenarios, the data displayed in a Microsoft Chart control comes from some dynamic source, such as from a database query. The appearance of the chart can be modified dynamically, as well; past installments in this article series showed how to programmatically customize the axes, labels, and other appearance-related settings. However, it is possible to statically define the chart's data and appearance strictly through the control's declarative markup. One of the demos examined in the Getting Started article rendered a column chart with seven columns whose labels and values were defined statically in the <asp:Series> tag's <Points> collection. Given this functionality, it should come as no surprise that the Microsoft Chart Controls also support serialization. Serialization is the process of persisting the state of a control or an object to some other medium, such as to disk. Deserialization is the inverse process, and involves taking the persisted data and recreating the control or object. With just a few lines of code you can persist the appearance settings, the data, or both to a file on disk or to any stream. Likewise, it takes just a few lines of codes to reconstitute a chart from the persisted information. This article shows how to use the Microsoft Chart Control's serialization functionality by examining a demo application that allows users to create custom charts, specifying the data to plot and some appearance-related settings. The user can then save a "snapshot" of this chart, which persists its appearance and data to a record in a database. From another page, users can view these saved chart snapshots. Read on to learn more! Read More >
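
    A minimal sketch of what the serialization calls look like, assuming an <asp:Chart> control with ID Chart1 on the page and a writable App_Data folder; the file name is a placeholder, and the article's demo persists to a database record rather than to disk:

        using System.Web.UI.DataVisualization.Charting;

        // persist both the chart's data and its appearance settings
        Chart1.Serializer.Content = SerializationContents.All;
        Chart1.Serializer.Save(Server.MapPath("~/App_Data/ChartSnapshot.xml"));

        // ...and later recreate the chart from the saved snapshot
        Chart1.Serializer.Load(Server.MapPath("~/App_Data/ChartSnapshot.xml"));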

    Read the article

  • The Retail Week Conference 2012 - Interview with Paul Dickson

    - by user801960
    Recently we attended the Retail Week Conference at the Hilton London Metropole Hotel in London. The conference proved to be an inspirational meeting of retail minds and the insight gained from both the speakers and the other delegates was invaluable. In particular we enjoyed hearing from Charlie Mayfield, Chairman at John Lewis Partnership, about understanding how the consumer is viewing the ever-changing world of retail; a session on how to encourage brand-loyal multichannel activities from Robin Terrell of House of Fraser with Alan White of the N Brown Group, Vince Russell from The Cloud and Lucy Neville-Rolfe from Tesco; and a fascinating session from Tim Steiner, Chief Executive of Ocado, about how the business makes it as easy as possible for consumers to shop on their various platforms, which included some surprising usage statistics. Oracle's own Vice President of Retail, Paul Dickson, also held a session with Richard Pennycook, Group Finance Director at Morrisons, about the role of technology in accelerating and supporting the business strategy. Morrisons' 'Evolve' programme takes a little-and-often approach to updating its technology infrastructure to spread cost and keep the adoption process gentle for staff, and the session explored how the process works and how Oracle's technology underpins the programme to optimise their operations using actionable insight. We had a quick chat with Paul Dickson at the session to get his thoughts on the programme - the video is below. We also filmed the whole presentation, so keep checking back on this blog if you're interested in seeing it.

    Read the article

  • Why does Ubuntu reset brightness settings at the loading screen?

    - by leugim
    Since I first installed Ubuntu 11.10, I noticed that volume and screen brightness get reset every time Ubuntu starts. Why is this so? And what ways are there to keep brightness and volume levels after rebooting? I have found some scripts that change the screen brightness at login. But this is not a good solution since login is slower, because it seems to wait until the screen brightness is at the level specified by the script. After entering the password I see the screen brightness go down gradually. Only after this is complete (~1 or 2 seconds) does the background disappear and Unity come up. The screen brightness is not remembered but instead redefined at login. So it gets remembered for the first part of the boot, then set to MAX, and then re-set to the normal value by the script. My boot process is as follows: desired brightness: 2 (13,33%) / Max brightness: 15 (100%) BIOS / brightness: OK GRUB (violet background color, white text) / brightness: OK Ubuntu loading screen with the dots / brightness: MAX (Windows 7 loads with OK brightness) User Login / brightness: MAX Unity starts / brightness: OK It seems to be more like a temporary patch than an actual solution. I'm looking for solutions that set the desired brightness permanently and consistently throughout the whole boot process. After updating to 12.04 the behavior is the same. I tried setpci -s 02:00.0 F4.B=XX The value of F4.B is always '0' regardless of what value I try to set it to (tried 0, ff, f, 5, etc). The solution in this answer does not have any noticeable effect: Desktop doesn't remember brightness settings after a reboot The variables at /sys/class/backlight/acpi_video0/ get changed if I use Fn+UP and Fn+DOWN Any help is appreciated. Thanks!
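
    For reference, a common stop-gap (a sketch only, assuming the acpi_video0 interface mentioned in the question and a desired level of 2) is to write the value straight to sysfs during boot, for example from /etc/rc.local, which runs as root:

        # /etc/rc.local -- place before the final "exit 0"; adjust the interface and value to your hardware
        echo 2 > /sys/class/backlight/acpi_video0/brightness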

    Read the article

  • Create user in Oracle 11g with the same privileges as in Oracle 10g XE

    - by Álvaro G. Vicario
    I'm a PHP developer (not a DBA) and I've been working with Oracle 10g XE for a while. I'm used to XE's simplified user management: Go to Administration/ Users/ Create user Assign user name and password Roles: leave the default ones (connect and resource) Privileges: click on "Enable all" to select the 11 possible ones Create This way I get a user that has full access to its data and no access to everything else. This is fine since I only need it to develop my app. When the app is to be deployed, the client's DBAs configure the environment. Now I have to create users in a full Oracle 11g server and I'm completely lost. I have a new concept (profiles) and there're like 20 roles and hundreds of privileges in various categories. What steps do I need to complete in Oracle Enterprise Manager in order to obtain a user with the same privileges I used to assign in XE? ==== UPDATE ==== I think I'd better provide a detailed explanation so I make myself clearer. This is how I create a user in 10g XE: Roles: [X] CONNECT [X] RESOURCE [ ] DBA Direct Asignment System Privileges: [ ] CREATE DATABASE LINK [ ] CREATE MATERIALIZED VIEW [ ] CREATE PROCEDURE [ ] CREATE PUBLIC SYNONYM [ ] CREATE ROLE [ ] CREATE SEQUENCE [ ] CREATE SYNONYM [ ] CREATE TABLE [ ] CREATE TRIGGER [ ] CREATE TYPE [ ] CREATE VIEW I click on Enable All and I'm done. This is what I'm asked when doing the same in 11g: Profile: (*) DEFAULT ( ) WKSYS_PROF ( ) MONITORING_PROFILE Roles: CONNECT: [ ] Admin option [X] Default value Edit List: AQ_ADMINISTRATOR_ROLE AQ_USER_ROLE AUTHENTICATEDUSER CSW_USR_ROLE CTXAPP CWM_USER DATAPUMP_EXP_FULL_DATABASE DATAPUMP_IMP_FULL_DATABASE DBA DELETE_CATALOG_ROLE EJBCLIENT EXECUTE_CATALOG_ROLE EXP_FULL_DATABASE GATHER_SYSTEM_STATISTICS GLOBAL_AQ_USER_ROLE HS_ADMIN_ROLE IMP_FULL_DATABASE JAVADEBUGPRIV JAVAIDPRIV JAVASYSPRIV JAVAUSERPRIV JAVA_ADMIN JAVA_DEPLOY JMXSERVER LOGSTDBY_ADMINISTRATOR MGMT_USER OEM_ADVISOR OEM_MONITOR OLAPI_TRACE_USER OLAP_DBA OLAP_USER OLAP_XS_ADMIN ORDADMIN OWB$CLIENT OWB_DESIGNCENTER_VIEW OWB_USER RECOVERY_CATALOG_OWNER RESOURCE SCHEDULER_ADMIN SELECT_CATALOG_ROLE SPATIAL_CSW_ADMIN SPATIAL_WFS_ADMIN WFS_USR_ROLE WKUSER WM_ADMIN_ROLE XDBADMIN XDB_SET_INVOKER XDB_WEBSERVICES XDB_WEBSERVICES_OVER_HTTP XDB_WEBSERVICES_WITH_PUBLIC System Privileges: <Empty> Edit List: ACCESS_ANY_WORKSPACE ADMINISTER ANY SQL TUNING SET ADMINISTER DATABASE TRIGGER ADMINISTER RESOURCE MANAGER ADMINISTER SQL MANAGEMENT OBJECT ADMINISTER SQL TUNING SET ADVISOR ALTER ANY ASSEMBLY ALTER ANY CLUSTER ALTER ANY CUBE ALTER ANY CUBE DIMENSION ALTER ANY DIMENSION ALTER ANY EDITION ALTER ANY EVALUATION CONTEXT ALTER ANY INDEX ALTER ANY INDEXTYPE ALTER ANY LIBRARY ALTER ANY MATERIALIZED VIEW ALTER ANY MINING MODEL ALTER ANY OPERATOR ALTER ANY OUTLINE ALTER ANY PROCEDURE ALTER ANY ROLE ALTER ANY RULE ALTER ANY RULE SET ALTER ANY SEQUENCE ALTER ANY SQL PROFILE ALTER ANY TABLE ALTER ANY TRIGGER ALTER ANY TYPE ALTER DATABASE ALTER PROFILE ALTER RESOURCE COST ALTER ROLLBACK SEGMENT ALTER SESSION ALTER SYSTEM ALTER TABLESPACE ALTER USER ANALYZE ANY ANALYZE ANY DICTIONARY AUDIT ANY AUDIT SYSTEM BACKUP ANY TABLE BECOME USER CHANGE NOTIFICATION COMMENT ANY MINING MODEL COMMENT ANY TABLE CREATE ANY ASSEMBLY CREATE ANY CLUSTER CREATE ANY CONTEXT CREATE ANY CUBE CREATE ANY CUBE BUILD PROCESS CREATE ANY CUBE DIMENSION CREATE ANY DIMENSION CREATE ANY DIRECTORY CREATE ANY EDITION CREATE ANY EVALUATION CONTEXT CREATE ANY INDEX CREATE ANY INDEXTYPE CREATE ANY JOB CREATE ANY LIBRARY CREATE ANY MATERIALIZED VIEW CREATE ANY MEASURE 
FOLDER CREATE ANY MINING MODEL CREATE ANY OPERATOR CREATE ANY OUTLINE CREATE ANY PROCEDURE CREATE ANY RULE CREATE ANY RULE SET CREATE ANY SEQUENCE CREATE ANY SQL PROFILE CREATE ANY SYNONYM CREATE ANY TABLE CREATE ANY TRIGGER CREATE ANY TYPE CREATE ANY VIEW CREATE ASSEMBLY CREATE CLUSTER CREATE CUBE CREATE CUBE BUILD PROCESS CREATE CUBE DIMENSION CREATE DATABASE LINK CREATE DIMENSION CREATE EVALUATION CONTEXT CREATE EXTERNAL JOB CREATE INDEXTYPE CREATE JOB CREATE LIBRARY CREATE MATERIALIZED VIEW CREATE MEASURE FOLDER CREATE MINING MODEL CREATE OPERATOR CREATE PROCEDURE CREATE PROFILE CREATE PUBLIC DATABASE LINK CREATE PUBLIC SYNONYM CREATE ROLE CREATE ROLLBACK SEGMENT CREATE RULE CREATE RULE SET CREATE SEQUENCE CREATE SESSION CREATE SYNONYM CREATE TABLE CREATE TABLESPACE CREATE TRIGGER CREATE TYPE CREATE USER CREATE VIEW CREATE_ANY_WORKSPACE DEBUG ANY PROCEDURE DEBUG CONNECT SESSION DELETE ANY CUBE DIMENSION DELETE ANY MEASURE FOLDER DELETE ANY TABLE DEQUEUE ANY QUEUE DROP ANY ASSEMBLY DROP ANY CLUSTER DROP ANY CONTEXT DROP ANY CUBE DROP ANY CUBE BUILD PROCESS DROP ANY CUBE DIMENSION DROP ANY DIMENSION DROP ANY DIRECTORY DROP ANY EDITION DROP ANY EVALUATION CONTEXT DROP ANY INDEX DROP ANY INDEXTYPE DROP ANY LIBRARY DROP ANY MATERIALIZED VIEW DROP ANY MEASURE FOLDER DROP ANY MINING MODEL DROP ANY OPERATOR DROP ANY OUTLINE DROP ANY PROCEDURE DROP ANY ROLE DROP ANY RULE DROP ANY RULE SET DROP ANY SEQUENCE DROP ANY SQL PROFILE DROP ANY SYNONYM DROP ANY TABLE DROP ANY TRIGGER DROP ANY TYPE DROP ANY VIEW DROP PROFILE DROP PUBLIC DATABASE LINK DROP PUBLIC SYNONYM DROP ROLLBACK SEGMENT DROP TABLESPACE DROP USER ENQUEUE ANY QUEUE EXECUTE ANY ASSEMBLY EXECUTE ANY CLASS EXECUTE ANY EVALUATION CONTEXT EXECUTE ANY INDEXTYPE EXECUTE ANY LIBRARY EXECUTE ANY OPERATOR EXECUTE ANY PROCEDURE EXECUTE ANY PROGRAM EXECUTE ANY RULE EXECUTE ANY RULE SET EXECUTE ANY TYPE EXECUTE ASSEMBLY EXPORT FULL DATABASE FLASHBACK ANY TABLE FLASHBACK ARCHIVE ADMINISTER FORCE ANY TRANSACTION FORCE TRANSACTION FREEZE_ANY_WORKSPACE GLOBAL QUERY REWRITE GRANT ANY OBJECT PRIVILEGE GRANT ANY PRIVILEGE GRANT ANY ROLE IMPORT FULL DATABASE INSERT ANY CUBE DIMENSION INSERT ANY MEASURE FOLDER INSERT ANY TABLE LOCK ANY TABLE MANAGE ANY FILE GROUP MANAGE ANY QUEUE MANAGE FILE GROUP MANAGE SCHEDULER MANAGE TABLESPACE MERGE ANY VIEW MERGE_ANY_WORKSPACE ON COMMIT REFRESH QUERY REWRITE READ ANY FILE GROUP REMOVE_ANY_WORKSPACE RESTRICTED SESSION RESUMABLE ROLLBACK_ANY_WORKSPACE SELECT ANY CUBE SELECT ANY CUBE DIMENSION SELECT ANY DICTIONARY SELECT ANY MINING MODEL SELECT ANY SEQUENCE SELECT ANY TABLE SELECT ANY TRANSACTION UNDER ANY TABLE UNDER ANY TYPE UNDER ANY VIEW UNLIMITED TABLESPACE UPDATE ANY CUBE UPDATE ANY CUBE BUILD PROCESS UPDATE ANY CUBE DIMENSION UPDATE ANY TABLE Object Privileges: <Empty> Add: Clase Java Clases de Trabajos Cola Columna de Tabla Columna de Vista Espacio de Trabajo Función Instantánea Origen Java Paquete Planificaciones Procedimiento Programas Secuencia Sinónimo Tabla Tipos Trabajos Vista Consumer Group Privileges: <Empty> Default Consumer Group: (*) None Edit List: AUTO_TASK_CONSUMER_GROUP BATCH_GROUP DEFAULT_CONSUMER_GROUP INTERACTIVE_GROUP LOW_GROUP ORA$AUTOTASK_HEALTH_GROUP ORA$AUTOTASK_MEDIUM_GROUP ORA$AUTOTASK_SPACE_GROUP ORA$AUTOTASK_SQL_GROUP ORA$AUTOTASK_STATS_GROUP ORA$AUTOTASK_URGENT_GROUP ORA$DIAGNOSTICS SYS_GROUP And, of course, I wonder what options I should pick.
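    For reference, the 10g XE setup described above boils down to a short SQL script, so a minimal sketch of the equivalent grants on a full 11g server (skipping Enterprise Manager entirely) might look like the following. The user name, password and tablespace are placeholders, and the privilege list is simply the 11 XE checkboxes quoted earlier:
        -- Sketch only: names and password are placeholders; profiles and quotas are decisions for your DBAs
        CREATE USER appdev IDENTIFIED BY change_me
          DEFAULT TABLESPACE users
          QUOTA UNLIMITED ON users;
        -- The two default XE roles
        GRANT CONNECT, RESOURCE TO appdev;
        -- The 11 direct system privileges offered by the XE "Create user" page
        GRANT CREATE DATABASE LINK, CREATE MATERIALIZED VIEW, CREATE PROCEDURE,
              CREATE PUBLIC SYNONYM, CREATE ROLE, CREATE SEQUENCE, CREATE SYNONYM,
              CREATE TABLE, CREATE TRIGGER, CREATE TYPE, CREATE VIEW
          TO appdev;
    Whether CONNECT and RESOURCE still grant what you expect on 11g, and which profile to use, are exactly the decisions the client's DBAs would normally make, so treat this as a starting point rather than a recommendation.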

    Read the article

  • Ubuntu 11 and 12 initially fast but later bogs down, CPU pegged

    - by uos??
    I started with Ubuntu 11 a few weeks ago. It's on a DELL M4300 with a OCZ SSD. Default setup, except that I've installed the proprietary NVIDIA graphics and BROADCOM wireless drivers. Dual boot with Windows. If I cold boot into Ubuntu, it is very fast, just like the Windows experience that I'm used to. But SOMETHING happens, and I haven't yet determined what, but the system gets incredibly slow and stays that way. At first I thought it had to do with Adobe Flash because it seemed to be triggered by sites with Flash. But then I removed Flash and the problem remains. I thought it was just an overheating problem, but I've now upgraded to 12.04 which supposedly fixes the overheating problems I've read about. Perhaps the heat situation was brought on by Flash in my early cases? So I installed Jupiter for CPU management, but the thermometer reports a familiar Windows-side temperature of 53 degrees Celsius. Switching Jupiter to lower performance doesn't help. When I check the System Monitor application, sorting by CPU usage, there are no obvious problem processes. However, in the graphs tab, both CPU cores are pegged at 100%! I notice that the slowness seems to be similar to the extremely bad performance I got prior to installing the NVIDIA drivers. I'm not sure if that helps. This is the strangest part to me - although the temperature seems OK, even after rebooting, the system remains slow - starting with GRUB2 which is very noticeably delayed, all the way through to either Ubuntu or Windows! That's right, even the Windows side suffers effects and takes several minutes to complete booting whereas normally (with my SSD) it's ready to use in 15 seconds. The only way to fix it is to shutdown and let the parts cool down. Or maybe it just needs to completely power off and boot rather than a soft reboot, temperature has nothing to do with it? - is that possible? But know that I have never had this problem in Windows, even if Windows gets very hot (135 F) a reboot would be enough time for it to recover. For this reason, I don't think it's a heat thing, but I can't imagine what else could be surviving the reboot. I'm entirely updated - there are no pending updates. I have the Post-Release updates of NVIDIA too, btw. If this sounds CLOSE to something you know about, but one of the details doesn't line up exactly, it might be a mistake in my perception. Are there tests you can suggest to rule something out? Thanks! 
processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 23 model name : Intel(R) Core(TM)2 Duo CPU T9500 @ 2.60GHz stepping : 6 microcode : 0x60c cpu MHz : 800.000 cache size : 6144 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 2 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm ida dts tpr_shadow vnmi flexpriority bogomips : 5187.00 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 23 model name : Intel(R) Core(TM)2 Duo CPU T9500 @ 2.60GHz stepping : 6 microcode : 0x60c cpu MHz : 800.000 cache size : 6144 KB physical id : 0 siblings : 2 core id : 1 cpu cores : 2 apicid : 1 initial apicid : 1 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm ida dts tpr_shadow vnmi flexpriority bogomips : 5186.94 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: (Redundant figures removed. You can view them in the edits if they are still relevant) ps: %CPU PID USER COMMAND 9.4 2399 jason gnome-terminal 6.2 2408 jason bash 17.3 1117 root /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none 13.7 1667 jason compiz 1.3 1960 jason /usr/lib/unity/unity-panel-service 1.3 1697 jason python /usr/bin/jupiter 0.9 1964 jason /usr/lib/indicator-appmenu/hud-service 0.6 1689 jason nautilus -n 0.4 1458 jason //bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session I should highlight specifically that GRUB2 can also be very slow. I don't know the relationship of which scenarios GRUB2 is also slow, but WHEN it is slow, it is slow both before the menu appears and after the selection is made - although for the diagnosis of GRUB2 it is harder for me to tell what the normal speeds should be. With SSD, I would expect that GRUB2 could load instantly, and that the GRUB2 purple would disappear instantly after the selection. The only delay to be expected is the change in graphics modes (though I couldn't guess why that ever requires any noticeable time)

    Read the article

  • VSNewFile: A Visual Studio Addin to More Easily Add New Items to a Project

    - by InfinitiesLoop
    My first Visual Studio Add-in! Creating add-ins is pretty simple, once you get used to the CommandBar model it is using, which is apparently a general Office suite extensibility mechanism. Anyway, let me first explain my motivation for this. It started out as an academic exercise, as I have always wanted to dip my feet into a little VS extensibility. But I thought of a legitimate need for an add-in, at least in my personal experience, so it took on new life. But I figured I can’t be the only one who has felt this way, so I decided to publish the add-in and host it on GitHub (VSNewFile on GitHub), hoping to spur contributions. Adding Files the Built-in Way Here’s the problem I wanted to solve. You’re working on a project, and it’s time to add a new file to the project. Whatever it is – a class, script, html page, aspx page, or what-have-you – you go through a menu or keyboard shortcut to get to the “Add New Item” dialog. Typically, you do it by right-clicking the location where you want the file (the project or a folder of it): This brings up a dialog that contains, well, every conceivable type of item you might want to add. It’s all the available item templates, which can result in anywhere from a ton to a veritable sea of choices. To be fair, this dialog has been revamped in Visual Studio 2010, which organizes it a little better than Visual Studio 2008, and adds a search box. It also loads noticeably faster. To me, this dialog is just getting in my way. If I want to add a JavaScript script to my project, I don’t want to have to hunt for the script template item in this dialog. Yes, it is categorized, and yes, it now has a search box. But still, it is a lot of UI to swim through when all I need is a new file in the project. I will name it. I will provide the content. I don’t even need a ‘template’. VS kind of realizes this. In the add menu in a class library project, for example, there is an “Add Class…” choice. But all this really does is select that project item from the dialog by default. You still must wait for the dialog, see it, and type in a name for the file. How is that really any different from hitting F2 on an existing item? It isn’t. Adding Files the Hack Way What I often find myself doing, just to avoid going through this dialog, is to copy and paste an existing file, rename it, then “CTRL-A, DEL” the content. In a few short keystrokes I’ve got my new file. Even if the original file wasn’t the right type, it doesn’t matter – I will rename it anyway, including the extension. It works well enough if the place I am adding the file to doesn’t have much in it already. But if there are a lot of files at that level, it sucks, because the new file will have the name “Copy of xyz”, causing it to be moved into the ‘C’ section of the alphabetically sorted items, which might be far, far away from the original file (and so I tend to try and copy a file that starts with ‘C’ *evil grin*). Using ‘Export Template’ To be completely fair I should at least mention this feature. I’m not even sure if this is new in VS 2010 or not (I think so). But it allows you to export a project item or items, including potential project references required by it. Then it becomes a new item in the available ‘installed templates’. No doubt this is useful to help bootstrap new projects. But that still requires you to go through the ‘New Item’ dialog.
    Adding Files with VSNewFile So hopefully I have sufficiently defined the problem and got a few of you to think, “Yeah, me too!”… What VSNewFile does is let you skip the dialog entirely by adding project items directly to the context menu. But it does a bit more than that, so do read on. For example, to add a new class, you can right-click the location and pick that option. A new .cs file is instantly added to the project, and the new item is selected and put into the ‘rename’ mode immediately. The default items available are shown here. But you can customize them. You can also customize the content of each template. To do so, you create a directory in your documents folder, ‘VSNewFile Templates’. In there, you drop the templates you want to use, but you name them in a particular way. For example, here’s a template that will add a new item named “Add TITLE”. It will add a project item named “SOMEFILE.foo” (or ‘SOMEFILE1.foo’ if that exists, etc). The format of the file name is: <ORDER>_<KEY>_<BASE FILENAME>_<ICON ID>_<TITLE>.<EXTENSION> Where: <ORDER> is a number that lets you determine the order of the items in the menu (relative to each other). <KEY> is a case-sensitive identifier different for each template item. More on that later. <BASE FILENAME> is the default name of the file, which doesn’t matter that much, since you will be renaming it anyway. <ICON ID> is a number that dictates the icon used for the menu item. There are a huge number of built-in choices. More on that later. <TITLE> is the string that will appear in the menu. And the contents of the file are the default content for the item (the ‘template’). The content of the file can contain anything you want, of course. But it also supports two tokens: %NAMESPACE% and %FILENAME%, which will be replaced with the corresponding values. Here is the content of this sample: testing Namespace = %NAMESPACE% Filename = %FILENAME% I kind of went back and forth on this. I could have made it so there’d be an XML or JSON file that defines the templates, instead of cramming all this data into the filename itself. I like the simplicity of this better. It makes it easy to customize, since you can literally just throw these files around, copy them from someone else, etc., without worrying about merging data into a central description file in whatever format. Here’s our new item showing up: Practical Use One immediate thing I am using this for is to make it easier to add very commonly used scripts to my web projects. For example, uh, say, jQuery? :) All I need to do is drop jQuery-1.4.2.js and jQuery-1.4.2.min.js into the templates folder, provide the order, title, etc., and then instantly I can add jQuery to any project I have without even thinking about “where is jQuery? Can I copy it from that other project?” Using the KEY There are two reasons for the ‘key’ portion of the item. First, it allows you to turn off the built-in, default templates, which are: FILE = Add File (generic, empty file) VB = Add VB Class CS = Add C# Class (includes some basic usings) HTML = Add HTML page (includes basic structure, doctype, etc) JS = Add Script (includes an immediately-invoking function closure) To turn one off, just include a file with the name “_<KEY>”. For example, to turn off all the items except our custom one, you do this: The other reason for the key is that there are new Visual Studio Commands created for each one. This makes it possible to bind a keyboard shortcut to one of them.
So you could, for example, have a keyboard combination that adds a new web page to your website, or a new CS class to your class library, etc. Here is our sample item showing up in the keyboard bindings option. Even though the contents of the template directory may change from one launch of Visual Studio to the next, the bindings will remain attached to any item with a particular key, thanks to it taking care not to lose keyboard bindings even though the commands are completely recreated each time. The Icon Face ID Visual Studio uses a Microsoft Office style add-in mechanism, I gather. There are a predetermined set of built-in icons available. You can use your own icons when developing add-ins, of course, but I’m no designer. I just wanted to find appropriate-ish icons for the built-in templates, and allow you to choose from an existing built-in icon for your own. Unfortunately, there isn’t a lot out there on the interwebs that helps you figure out what the built-in types are. There’s an MSDN article that describes at length a way to create a program that lists all the icons. But I don’t want to write a program to figure them out! Just show them to me! Sheesh :) Thankfully, someone out there felt the same way, and uses a novel hack to get the icons to show up in an outlook toolbar. He then painstakingly took screenshots of them, one group at a time. It isn’t complete though – there are tens of thousands of icons. But it’s good enough. If anyone has an exhaustive list, please let me, and the rest of the add-in community know. Icon Face ID Reference Installing the Add-in It will work with Visual Studio 2008 and Visual Studio 2010. Just unzip the release into your Documents\Visual Studio 20xx\Addins folder. It contains the binary and the Visual Studio “.addin” file. For example, the path to mine is: C:\Users\InfinitiesLoop\Documents\Visual Studio 2010\Addins Conclusion So that’s it! I hope you find it as useful as I have. It’s on GitHub, so if you’re into this kind of thing, please do fork it and improve it! Reference: VSNewFile on GitHub VSNewFile release on GitHub Icon Face ID Reference

    Read the article

  • Importing data from text file to specific columns using BULK INSERT

    - by Dinesh Asanka
    Bulk insert is much faster than using other techniques such as SSIS. However, when you are using bulk insert you can’t insert into specific columns. If, for example, there are five columns in a table, you should have five values for each record in the text file you are importing from. This is an issue when you are expecting default values to be inserted into tables. Let us say you have a table as below: In this table, you are expecting ID, Status and CreatedDate to be updated automatically, so your text file may only have FirstName and LastName values, as below:
    Dinesh,Asanka
    Saman,Liyanage
    Ruwan,Silva
    Susantha,Bathige
    Jude,Peires
    Sanjeewa,Jayawickrama
    If you use bulk insert against this table as follows, you will get an error: Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 1 (ID). To avoid this you will need to create a view with only the columns you are expecting to fill and use bulk insert against that view instead, as sketched below. If you check the table afterwards, you will see the values from the text file alongside the default values.
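    A minimal sketch of that workaround in T-SQL is shown below; the table definition, view name and file path are assumptions for illustration, since the original table was only shown as an image:
        -- Assumed table: ID, Status and CreatedDate are filled in automatically
        CREATE TABLE dbo.Person (
            ID INT IDENTITY(1,1) PRIMARY KEY,
            FirstName VARCHAR(50),
            LastName VARCHAR(50),
            [Status] CHAR(1) DEFAULT 'A',
            CreatedDate DATETIME DEFAULT GETDATE()
        );
        GO
        -- View exposing only the columns present in the text file
        CREATE VIEW dbo.vw_PersonImport AS
            SELECT FirstName, LastName FROM dbo.Person;
        GO
        -- Bulk insert against the view; the remaining columns take their defaults
        BULK INSERT dbo.vw_PersonImport
        FROM 'C:\Data\names.txt'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');
    BULK INSERT can target a view as long as the view maps to a single updatable base table, so each imported row gets its identity and default values without the text file having to supply them.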

    Read the article

  • Eee PC - Create USB Recovery Drive w/ Files Copied From Recovery Partition

    - by nedm
    I have an Eee PC 1005HAB whose hard disk has failed. I have no recovery CD/DVD, but I did previously back up the contents of the recovery partition, and would like to use them to create a bootable USB drive to reinstall the factory settings on the new hard drive. Since I simply copied all the files in the recovery partition, rather than hitting F9 during boot and running through the process to create a recovery disk or drive, how do I now use the files to create a bootable USB drive that will do the recovery? In the BIOS I have disabled Boot Booster and set external drives to the top of the boot priority, but simply copying all the recovery partition files to a USB drive doesn't make it bootable. I've downloaded the HP utility for creating bootable USB drives and have tried using it to make the USB drive bootable, but I'm not sure what to do with the ghost image and utilities from the recovery partition to get the recovery process to start properly. Thanks in advance for any help.

    Read the article

  • Unable to load nvidia (bumblebee) in Ubuntu 14.04 (only nouveau loads)

    - by Ubuntuser
    Bumblebee stopped working on my system after upgrading to the stable version of Ubuntu 14.04. During installation I get this error: rmmod: ERROR: Module nouveau is in use Setting up bumblebee (3.2.1-90~trustyppa1) ... Selecting 01:00:0 as discrete nvidia card. If this is incorrect, edit the BusID line in /etc/bumblebee/xorg.conf.nouveau . bumblebeed start/running, process 11133 Processing triggers for initramfs-tools (0.103ubuntu4.1) ... update-initramfs: Generating /boot/initrd.img-3.14.1-031401-generic Setting up bumblebee-nvidia (3.2.1-90~trustyppa1) ... Selecting 01:00:0 as discrete nvidia card. If this is incorrect, edit the BusID line in /etc/bumblebee/xorg.conf.nvidia rmmod: ERROR: Module nouveau is in use bumblebeed start/running, process 18284 It says nouveau is in use. I checked the loaded modules: lsmod | grep nouveau nouveau 1097199 1 mxm_wmi 13021 1 nouveau ttm 85115 1 nouveau i2c_algo_bit 13413 2 i915,nouveau drm_kms_helper 52758 2 i915,nouveau drm 302817 7 ttm,i915,drm_kms_helper,nouveau wmi 19177 3 dell_wmi,mxm_wmi,nouveau video 19476 2 i915,nouveau However, I have nouveau in my blacklist: cat /etc/modprobe.d/blacklist.conf | grep nouveau blacklist nouveau blacklist lbm-nouveau alias nouveau off alias lbm-nouveau off My GRUB is also set to nomodeset: cat /etc/default/grub | grep nomodeset GRUB_CMDLINE_LINUX_DEFAULT="nomodeset quiet splash" My graphics card is NVIDIA Optimus: lspci | grep -i vga 00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 18) 01:00.0 VGA compatible controller: NVIDIA Corporation GT218M [GeForce 310M] (rev ff) I've raised a bug in Launchpad: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1327598 Note: nvidia-prime is working for me (partially), with frequent mouse locks. Interestingly, bumblebee works perfectly fine on my Fedora 20 partition on this same laptop.

    Read the article

  • San Joaquin County, California Wins AIIM 2012 Carl E. Nelson Best Practice Award

    - by Peggy Chen
    Last month, AIIM, the global community of information professionals, announced the winners of the 2012 Carl E. Nelson Best Practices Awards, and San Joaquin County, California, won in the small company category for 1-100 employees. The Carl E. Nelson Best Practices Award was established to recognize excellence in the area of information management. "Best practice" denotes a standard of excellence that has been achieved within an organization and refers to a process that can be quantified, adapted and repeated. Like many counties, San Joaquin County was faced with huge challenges due to decreasing funds and staff, including a reduced capacity to build capability. It needed to streamline processes, cut costs per activity, modernize and strengthen the infrastructure, and adopt new technology and standards such as the National Information Exchange Model (NIEM). The Integrated Justice Information System (IJIS) provides a Web-based system to link more than 650,000 residents, 18 agencies countywide and other law enforcement systems nationwide. The county's modernization initiative focused on replacing its outdated warrant system, implementing service-oriented architecture (SOA) to simplify integration between county law and justice systems, and deploying Business Process Management (BPM), case management with content management, and Web technologies from Oracle. A critical part of the county's success has been the proper alignment of its strategic vision with the way the organization was enabled to plan and execute (and continues to execute) its modernization project. Congratulations to San Joaquin County!

    Read the article

< Previous Page | 567 568 569 570 571 572 573 574 575 576 577 578  | Next Page >