Search Results

Search found 8725 results on 349 pages for 'beginning steps'.

Page 186/349 | < Previous Page | 182 183 184 185 186 187 188 189 190 191 192 193  | Next Page >

  • How to install Canon MP610 printer on Ubuntu 12.04 x64

    - by Arkadius
    I installed Ubuntu 12.04 x64. The original Canon drivers are only for the 32-bit version. How can I install this printer on the 64-bit version? Arkadius
    HERE IS THE SOLUTION
    I looked for a solution for some time and finally found it. First I tried adding the repository as described here: http://www.iheartubuntu.com/2012/02/install-canon-printer-for-ubuntu-linux.html BUT it did NOT work. The printer was installed but every print job went somewhere (probably to /dev/null :) ). Installing sudo apt-get install ia32-libs did NOT work either (it was already installed). Finally I found the solution. NOTE: I did NOT use the original 32-bit Canon drivers. I also removed the drivers from the repository ppa:michael-gruz/canon. I found the solution almost at the end of this thread: http://ubuntuforums.org/showthread.php?t=1967725&page=10 The most important hint was found in response #97: "Do NOT install any PPA". I did as follows:
    Removed all copies of my printer.
    Removed the Canon drivers from the repository ppa:michael-gruz/canon: sudo apt-get remove cnijfilter*
    Added a new repository and installed CUPS for Canon: sudo apt-add-repository ppa:robbiew/cups-bjnp sudo apt-get update sudo apt-get install cups-bjnp
    Installed Gutenprint: sudo apt-get install printer-driver-gutenprint
    Restarted CUPS: sudo restart cups
    Added myself to the group lp: sudo usermod -G lp -a your_user_name
    Added the printer using the steps from the link above: Don't install any PPA for the drivers. Click the cog up in the right-hand corner and select Printers. Turn on the printer and make sure it is connected. When the Printers window appears, click +Add and wait a few minutes. Your printer should appear within the configuration wizard. Mine did, and it's a Canon MX330. Click the defaults and continue on. CUPS should identify your printer. I saw a few other models in the list. I was able to successfully print a test page afterwards. I hope this will also help someone else. Arkadius

    Read the article

  • An Overview of Batch Processing in Java EE 7

    - by Janice J. Heiss
    Up on otn/java is a new article by Oracle senior software engineer Mahesh Kannan, titled “An Overview of Batch Processing in Java EE 7.0,” which explains the new batch processing capabilities provided by JSR 352 in Java EE 7. Kannan explains that “Batch processing is used in many industries for tasks ranging from payroll processing; statement generation; end-of-day jobs such as interest calculation and ETL (extract, load, and transform) in a data warehouse; and many more. Typically, batch processing is bulk-oriented, non-interactive, and long running—and might be data- or computation-intensive. Batch jobs can be run on schedule or initiated on demand. Also, since batch jobs are typically long-running jobs, check-pointing and restarting are common features found in batch jobs.” JSR 352 defines the programming model for batch applications plus a runtime to run and manage batch jobs. The article covers feature highlights, selected APIs, the structure of the Job Specification Language, and explains some of the key functions of JSR 352 using a simple payroll processing application. The article also describes how developers can run batch applications using GlassFish Server Open Source Edition 4.0. Kannan summarizes the article as follows: “In this article, we saw how to write, package, and run simple batch applications that use chunk-style steps. We also saw how the checkpoint feature of the batch runtime allows for the easy restart of failed batch jobs. Yet, we have barely scratched the surface of JSR 352. With the full set of Java EE components and features at your disposal, including servlets, EJB beans, CDI beans, EJB automatic timers, and so on, feature-rich batch applications can be written fairly easily.” Check out the article here.
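    To give a feel for what a chunk-style step looks like before you read the article, here is a minimal JSR 352 sketch of the reader/processor/writer triple and the JobOperator call that starts a job. The PayrollReader/PayrollProcessor/PayrollWriter names and the "payrollJob" JSL file are hypothetical placeholders, not code from Kannan's article:

        import java.util.List;
        import java.util.Properties;
        import javax.batch.api.chunk.AbstractItemReader;
        import javax.batch.api.chunk.AbstractItemWriter;
        import javax.batch.api.chunk.ItemProcessor;
        import javax.batch.operations.JobOperator;
        import javax.batch.runtime.BatchRuntime;
        import javax.inject.Named;

        @Named
        public class PayrollReader extends AbstractItemReader {
            private int next = 1;

            @Override
            public Object readItem() {
                // Returning null tells the batch runtime the input is exhausted.
                return next <= 3 ? "employee-" + next++ : null;
            }
        }

        @Named
        class PayrollProcessor implements ItemProcessor {
            @Override
            public Object processItem(Object item) {
                return item + ":paid"; // compute the paycheck for one input record
            }
        }

        @Named
        class PayrollWriter extends AbstractItemWriter {
            @Override
            public void writeItems(List<Object> items) {
                for (Object item : items) {
                    System.out.println(item); // persist one chunk of results
                }
            }
        }

        class PayrollJobStarter {
            public static void main(String[] args) {
                // "payrollJob" names META-INF/batch-jobs/payrollJob.xml (the JSL),
                // which wires the three artifacts above into one chunk step.
                JobOperator operator = BatchRuntime.getJobOperator();
                long executionId = operator.start("payrollJob", new Properties());
                System.out.println("Started job execution " + executionId);
            }
        }

    The chunk size set in the JSL (for example item-count="2") controls how many processed items are buffered before writeItems runs and a checkpoint is taken, which is what makes restarting failed jobs cheap.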

    Read the article

  • deWitters Game Loop in libgdx (Android)

    - by jaysingh
    I am a beginner and I want a complete example in LibGDX for Android (fixed-time game loop): how to limit the framerate to 50 or 60, and also how to manage interpolation between game states, with simple example code, e.g. the deWiTTERS Game Loop: @Override public void render() { float deltaTime = Gdx.graphics.getDeltaTime(); Update(deltaTime); Render(deltaTime); } libgdx comments: There is a Gdx.graphics.setVsync() method (generic = backend-independent), but it is not present in 0.9.1, only in the Nightlies. "Relying on vsync for fixed time steps is a REALLY bad idea. It will break on almost all hardware out there. See LwjglApplicationConfiguration, there's a flag in there that lets you toggle gpu/software vsynching. Play around with it." (Mario) NOTE that none of these limit the framerate to a specific value... if you REALLY need to limit the framerate for some reason, you'll have to handle it yourself by returning from render calls if xxx ms haven't passed since the last render call.
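    For reference, the usual way to get deWiTTERS-style fixed time steps in libgdx is to accumulate the frame delta inside render() and run the logic at a constant rate, then draw with an interpolation factor. Below is a minimal sketch of that pattern; updateGameState()/renderGameState() are hypothetical placeholders, and this is not official libgdx guidance (on older versions without ApplicationAdapter, implement ApplicationListener instead):

        import com.badlogic.gdx.ApplicationAdapter;
        import com.badlogic.gdx.Gdx;

        public class FixedStepGame extends ApplicationAdapter {
            private static final float STEP = 1f / 50f; // 50 logic updates per second
            private float accumulator;

            @Override
            public void render() {
                // Clamp huge deltas (pauses, breakpoints) so the loop doesn't spiral.
                float delta = Math.min(Gdx.graphics.getDeltaTime(), 0.25f);
                accumulator += delta;
                while (accumulator >= STEP) {
                    updateGameState(STEP); // logic always advances in fixed steps
                    accumulator -= STEP;
                }
                // How far we are between the last two logic states, in [0,1).
                float alpha = accumulator / STEP;
                renderGameState(alpha); // draw lerp(previousState, currentState, alpha)
            }

            private void updateGameState(float step) { /* move objects by step */ }

            private void renderGameState(float alpha) { /* interpolate and draw */ }
        }

    Rendering then runs as fast as the device (or vsync) allows while game logic stays at 50 updates per second, which is exactly the decoupling the deWiTTERS loop describes.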

    Read the article

  • Oracle SOA Suite customer panel: Successful Application Integration & SOA Projects

    - by Simone Geib
    At the recent SOA Suite customer panel, Roger Brown from UNS Energy, Fabio Ravagni from Cencosud and Paras Jain from Cisco discussed their recent SOA Suite implementations, business drivers and challenges, architecture and lessons learned. Roger started by describing how UNS redesigned their internet portal to improve their customer experience and reduce manual steps in their business processes. Through the use of Oracle Service Bus, Oracle BPEL Process Manager and Oracle Business Activity Monitoring, they provided more self-service functionality, automated their business processes and increased the use of their web site by 12.98% for number of visits and 33.58% for average visit duration. Next, Fabio described the challenges Cencosud faced through continuous expansion of their business, different standards and levels of expertise and large volumes of information. By introducing Oracle SOA Suite, Oracle Data Integrator and Oracle Enterprise Repository, and with the help of Oracle Consulting, they significantly simplified their integration model, reduced their maintenance effort and increased their integration governance. The implemented solution now has more than 400 services in production and more than 20 ongoing projects, which will make use of the new integration platform. Last, but not least, Paras discussed the challenges the Webex division of Cisco faced with a highly manual service fulfillment process, multiple data sources and the resulting large room for error and delay in customer time-to-service. Through a redesign of their order fulfillment process and the introduction of Oracle SOA Suite, they significantly improved their SLAs, eliminated duplicate orders, provided higher visibility into the order process and aligned business and IT. For more information about Oracle OpenWorld SOA & BPM sessions, please see the Focus on SOA and BPM document.

    Read the article

  • How do I make complex SQL queries easier to write?

    - by DragonLord
    I'm finding it very difficult to write complex SQL queries involving joins across many (at least 3-4) tables and involving several nested conditions. The queries I'm being asked to write are easily described by a few sentences, but can require a deceptive amount of code to complete. I'm finding myself often using temporary views to write these queries, which seems like a bit of a crutch. What tips can you provide that I can use to make these complex queries easier? More specifically, how do I break these queries down into the steps I need to use to actually write the SQL code? Note that the SQL I'm being asked to write is part of homework assignments for a database course, so I don't want software that will do the work for me. I want to actually understand the code I'm writing. More technical details: the database is hosted on a PostgreSQL server running on the local machine. The database is very small: there are no more than seven tables and the largest table has less than about 50 rows. The SQL queries are being passed unchanged to the server, via LibreOffice Base.

    Read the article

  • Problems Dual Boot

    - by user104108
    A few months ago I decided to install Ubuntu 12.04 on my PC alongside my Windows 7 partition. In order to do that and avoid any mistake, I followed the steps of this tutorial: http://www.linuxbsdos.com/2012/05/17/how-to-dual-boot-ubuntu-12-04-and-windows-7/2/ Everything was going well until I decided to update to the 12.10 release. I don't know what happened, but after I updated my Ubuntu, it stopped working; it didn't even launch. When I turned on my PC and chose to run "Ubuntu 12.04" on the GRUB screen, a weird message appeared. Well, so I decided to install Ubuntu 12.10 and forget about the 12.04 partition, no problem. I erased the partitions used for Ubuntu 12.04 with EaseUS Partition Manager. However, when I start my PC, there is still the option of "Ubuntu 12.04" to choose; is that bad? And what about now: can I use the Windows installer of Ubuntu ( http://www.ubuntu.com/download/help/install-ubuntu-with-windows ) to install Ubuntu 12.10? What should I do to have Ubuntu 12.10 and Windows 7 in dual boot again? Thanks; Thales.

    Read the article

  • Web API, JavaScript, Chrome &amp; Cross-Origin Resource Sharing

    - by Brian Lanham
    The team spent much of the week working through the issues related to Chrome running on Windows 8 consuming cross-origin resources using Web API.  We thought it was resolved on day 2 but it resurfaced the next day.  We definitely resolved it today though.  I believe I do not fully understand the situation but I am going to explain what I know in an effort to help you avoid and/or resolve a similar issue.
    References
    We referenced many sources during our trial-and-error troubleshooting.  These are the links we referenced, in order of applicability to the solution:
    Zoiner Tejada: JavaScript and other material from –> http://www.devproconnections.com/content1/topic/microsoft-azure-cors-141869/catpath/windows-azure-platform2/page/3
    WebDAV: Where I learned about “Accept” –> http://www-jo.se/f.pfleger/cors-and-iis?
    IT Hit: Tells about NOT using ‘*’ –> http://www.webdavsystem.com/ajax/programming/cross_origin_requests
    Carlos Figueira: Sample back-end code (newer) –> http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d (older version) –> http://code.msdn.microsoft.com/CORS-support-in-ASPNET-Web-01e9980a
    Background
    As a measure of protection, Web designers (W3C) and implementers (Google, Microsoft, Mozilla) made it so that a request, especially a JSON request (but really any URL), sent from one domain to another will only work if the requestee “knows” about the requester and allows requests from it. So, for example, if you write an ASP.NET MVC Web API service and try to consume it from multiple apps, the browsers used may (will?) indicate that you are not allowed by showing an “Access-Control-Allow-Origin” error indicating the requester is not allowed to make requests. Internet Explorer (big surprise) is the odd-hair-colored step-child in this mix. It seems that, running locally at least, IE allows this for development purposes.  Chrome and Firefox do not.  In fact, Chrome is quite restrictive.  Notice the images below. IE shows data (a tabular view with one row for each day of a week) while Chrome does not (trust me, neither does Firefox).  Further, the Chrome developer console shows an XmlHttpRequest (XHR) error. Screen captures from IE (left) and Chrome (right). Note that Chrome does not display data and the console shows an XHR error.
    Why does this happen?
    The Web browser submits these requests and processes the responses, and each browser is different. Okay, so, IE is probably the only one that’s truly different.  However, Chrome has a specific process of performing a “pre-flight” check to make sure the service can respond to an “Access-Control-Allow-Origin” or Cross-Origin Resource Sharing (CORS) request.  So basically, the sequence is, if I understand correctly:  1) Page Loads –> 2) JavaScript Request Processed by Browser –> 3) Browser Prepares to Submit Request –> 4) [Chrome] Browser Submits Pre-Flight Request –> 5) Server Responds with HTTP 200 –> 6) Browser Submits Request –> 7) Server Responds with Data –> 8) Page Shows Data. This situation occurs for both GET and POST methods.  Typically, GET methods are called with query string parameters so there is no data posted.  Instead, the requesting domain needs to be permitted to request data but generally nothing more is required.  POSTs on the other hand send form data.  Therefore, more configuration is required (you’ll see the configuration below).  AJAX requests are not friendly with this (POSTs) either because they don’t post in a form.
    How to fix it.
    The team went through many iterations of self-hair removal and we think we finally have a working solution.  The trial-and-error approach eventually worked and we referenced many sources for the information.  I indicate those references above.  There are basically three (3) tasks needed to make this work. Assumptions: you are using Visual Studio, Web API, and JavaScript, need Cross-Origin Resource Sharing, and target several browsers.
    1. Configure the client
    Joel Cochran centralized our “cors-oriented” JavaScript (from here).  There are two calls, one for POST and one for GET:
    function(url, data, callback) {
        console.log(data);
        $.support.cors = true;
        var jqxhr = $.post(url, data, callback, "json")
            .error(function(jqXhHR, status, errorThrown) {
                if ($.browser.msie && window.XDomainRequest) {
                    var xdr = new XDomainRequest();
                    xdr.open("post", url);
                    xdr.onload = function () {
                        if (callback) {
                            callback(JSON.parse(this.responseText), 'success');
                        }
                    };
                    xdr.send(data);
                } else {
                    console.log(">" + jqXhHR.status);
                    alert("corsAjax.post error: " + status + ", " + errorThrown);
                }
            });
    };
    The POST CORS JavaScript function (credit to Zoiner Tejada)
    function(url, callback) {
        $.support.cors = true;
        var jqxhr = $.get(url, null, callback, "json")
            .error(function(jqXhHR, status, errorThrown) {
                if ($.browser.msie && window.XDomainRequest) {
                    var xdr = new XDomainRequest();
                    xdr.open("get", url);
                    xdr.onload = function () {
                        if (callback) {
                            callback(JSON.parse(this.responseText), 'success');
                        }
                    };
                    xdr.send();
                } else {
                    alert("CORS is not supported in this browser or from this origin.");
                }
            });
    };
    The GET CORS JavaScript function (credit to Zoiner Tejada)
    Now you need to call these functions to get and post your data (instead of, say, using $.Ajax). Here is a GET example:
    corsAjax.get(url, function(data) {
        if (data !== null && data.length !== undefined) {
            // do something with data
        }
    });
    And here is a POST example:
    corsAjax.post(url, item);
    Simple…except…you’re not done yet.
    2. Change Web API Controllers to Allow CORS
    There are actually two steps here.  Do you remember above when we mentioned the “pre-flight” check?  Chrome actually asks the server if it is allowed to ask it for cross-origin resource sharing access.  So you need to let the server know it’s okay.  This is a two-part activity:  a) add the appropriate response header Access-Control-Allow-Origin, and b) permit the API functions to respond to various methods including GET, POST, and OPTIONS.  OPTIONS is the method that Chrome and other browsers use to ask the server if it can ask about permissions.  Here is an example of a Web API controller thus decorated. NOTE: You’ll see a lot of references to using “*” in the header value.  For security reasons, Chrome does NOT recognize this as valid.
[HttpHeader("Access-Control-Allow-Origin", "http://localhost:51234")] [HttpHeader("Access-Control-Allow-Credentials", "true")] [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPATCH, COPY, MOVE, DELETE, MKCOL, LOCK, UNLOCK, PUT, GETLIB, VERSION-CONTROL, CHECKIN, CHECKOUT, UNCHECKOUT, REPORT, UPDATE, CANCELUPLOAD, HEAD, OPTIONS, GET, POST")] [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destination, Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control")] [HttpHeader("Access-Control-Max-Age", "3600")] public abstract class BaseApiController : ApiController {     [HttpGet]     [HttpOptions]     public IEnumerable<foo> GetFooItems(int id)     {         return foo.AsEnumerable();     }     [HttpPost]     [HttpOptions]     public void UpdateFooItem(FooItem fooItem)     {         // NOTE: The fooItem object may or may not         // (probably NOT) be set with actual data.         // If not, you need to extract the data from         // the posted form manually.         if (fooItem.Id == 0) // However you check for default...         {             // We use NewtonSoft.Json.             string jsonString = context.Request.Form.GetValues(0)[0].ToString();             Newtonsoft.Json.JsonSerializer js = new Newtonsoft.Json.JsonSerializer();             fooItem = js.Deserialize<FooItem>(new Newtonsoft.Json.JsonTextReader(new System.IO.StringReader(jsonString)));         }         // Update the set fooItem object.     } } Please note a few specific additions here: * The header attributes at the class level are required.  Note all of those methods and headers need to be specified but we find it works this way so we aren’t touching it. * Web API will actually deserialize the posted data into the object parameter of the called method on occasion but so far we don’t know why it does and doesn’t. * [HttpOptions] is, again, required for the pre-flight check. * The “Access-Control-Allow-Origin” response header should NOT NOT NOT contain an ‘*’. 3. Headers and Methods and Such We had most of this code in place but found that Chrome and Firefox still did not render the data.  Interestingly enough, Fiddler showed that the GET calls succeeded and the JSON data is returned properly.  We learned that among the headers set at the class level, we needed to add “ACCEPT”.  Note that I accidentally added it to methods and to headers.  Adding it to methods worked but I don’t know why.  We added it to headers also for good measure. [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPA... [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destin... Next Steps That should do it.  If it doesn’t let us know.  What to do next?  * Don’t hardcode the allowed domains.  Note that port numbers and other domain name specifics will cause problems and must be specified.  If this changes do you really want to deploy updated software?  Consider Miguel Figueira’s approach in the following link to writing a custom HttpHeaderAttribute class that allows you to specify the domain names and then you can do it dynamically.  There are, of course, other ways to do it dynamically but this is a clean approach. http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d

    Read the article

  • Problems after bumblebee installation

    - by Samuel
    I tried to install Bumblebee on Ubuntu 12.04 LTS by following the steps on the Ubuntu wiki site. But when I used this command: sudo add-apt-repository ppa:bumblebee/stable && sudo apt-get update this output came out: Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.q0zzLiXVT3 --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv 46C0364A882F14F899448FFCB22A95F88110A93A gpg: requesting key 8110A93A from hkp server keyserver.ubuntu.com gpg: key 8110A93A: "Launchpad PPA for Bumlebee Project" not changed gpg: Total number processed: 1 gpg: unchanged: 1 E: Type 'ain' is not known on line 3 in source list /etc/apt/sources.list.d/bumblebee-stable-precise.list E: The list of sources could not be read. There's also the same problem message when I try to run the update center: 'E: Type 'ain' is not known on line 3 in source list /etc/apt/sources.list.d/bumblebee-stable-precise.list, E: The list of sources could not be read., E: The package lists or status file could not be parsed or opened.' I don't know what to do since I'm a newbie at Linux. Thanks in advance, Samuel.

    Read the article

  • How to display workflow related tasks in the item display page where the workflow is currently running on in SharePoint2013

    - by ybbest
    In one of my projects, I needed to display workflow-related tasks on the display page of the item the workflow is currently running on. To achieve this, I'd like to add the tasks list view web part and use the connected web part (ID=workflowitemid). However, to make it work I need to unhide the WorkflowItemId field in the task list, as it is a hidden field and its CanToggleHidden property is set to false. I need to use reflection to change CanToggleHidden to true, as it only has a getter in the API; then I am able to unhide the field. You can download the script here. However, it is not ideal to display tasks this way (it makes your environment unsupported by Microsoft). Another way to display the related tasks is to use a SharePoint Designer solution with a list view web part and a data source. Here are the steps:
    1. Create a new list display form as below.
    2. Edit the custom display form in advanced mode.
    3. Find the PlaceHolderMain content placeholder and insert the DataView by choosing the associated workflow tasks list as below.
    4. Go to List View Tools >> OPTIONS.
    5. Create a parameter called workflowitemId which retrieves the value from the ID query string as below.
    6. Create a filter based on UIVersion = workflowitemId as below; we are going to change UIVersion to the WorkflowItemId property later, as WorkflowItemId is a hidden field and cannot be selected from the wizard.
    7. Replace UIVersion with WorkflowItemId in the CAML for the XsltListViewWebPart.
    8. Go to the new custom display page at http://yourserver/Lists/aa/CustomDisplayPage.aspx?ID=414 and you will see the associated tasks showing on the page.
    References: http://office.microsoft.com/en-us/sharepoint-designer-help/watch-this-design-a-document-review-workflow-solution-HA010256417.aspx (Videos 12 and 13)

    Read the article

  • WPF or WinForms for Game Development and learning resources?

    - by Stephen Lee Parker
    I'm looking to create a game framework for my own personal use... I want to use WPF, but I'm unsure if that is a wise choice... The games I will be writing should not require high performance graphics, so I am hoping to build on native classes... I do not want to rely on external DLL's unless I generate them myself. The games will be for young children, say 4 to 8. Most will be learning puzzles or simple shooters. The most advanced will be a platform game (non-scrolling screen like the old Atari Miner 2049er game). I think I know how to write something like the old Atari Chopper Command (partially written and my 4 year old loves it, but I used WinForms and GDI), Pac-Man, Tetris, Astroids, Space Invaders, Slider Puzzle, but I do not really know how to write the platform game... In my mind, I'm getting caught in collision detection and how to make a character jump and how to make a character walk up a slope or steps... Can anyone point me to information on developing a platform game in C#? Would you suggest WinForms or WPF for game development? I'm not looking for great graphics and speed, just entertaining game play...
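    On the collision and jumping part specifically: simple platformers usually combine axis-aligned bounding boxes, per-frame gravity, and a ground flag that re-enables jumping, resolving the two axes separately. The sketch below is written in Java, but the logic maps one-to-one to C# with WinForms/GDI or WPF; all names and constants are illustrative only:

        import java.awt.Rectangle;

        public class Player {
            static final float GRAVITY = 0.5f, JUMP_SPEED = -10f, MOVE_SPEED = 3f;
            float x, y, vx, vy;
            final int w = 16, h = 24;
            boolean onGround;

            void update(boolean left, boolean right, boolean jump, Rectangle[] solids) {
                vx = (left ? -MOVE_SPEED : 0) + (right ? MOVE_SPEED : 0);
                if (jump && onGround) vy = JUMP_SPEED; // can only jump off the ground
                vy += GRAVITY;                         // gravity pulls every frame

                x += vx; // move horizontally, then push out of any solid we entered
                for (Rectangle s : solids)
                    if (bounds().intersects(s)) x = vx > 0 ? s.x - w : s.x + s.width;

                onGround = false;
                y += vy; // move vertically, then resolve floors and ceilings
                for (Rectangle s : solids)
                    if (bounds().intersects(s)) {
                        if (vy > 0) { y = s.y - h; onGround = true; } // landed on top
                        else y = s.y + s.height;                      // hit a ceiling
                        vy = 0;
                    }
            }

            Rectangle bounds() {
                return new Rectangle(Math.round(x), Math.round(y), w, h);
            }
        }

    Walking up low steps falls out of the same code if you also try a small upward offset before cancelling a blocked horizontal move; true slopes need a per-column height lookup instead of plain rectangles.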

    Read the article

  • JDBC Connection Pools in Glassfish

    - by Dana Singleterry
    I've been attempting to configure GlassFish 3.1.2.2 for ADF 11g and the need arose to create a JDBC connection pool for my Oracle XE 11g database. While this is really very trivial, there were no samples of how to do this, and documentation, while good, rarely ever provides concrete examples. After fumbling around for a few minutes searching for an example I gave up and figured it out on my own. Here are the steps for any of you that may be in need. This can be done either via the GlassFish command line tool asadmin or through the admin console. I'm doing this through the admin console. Start GlassFish and connect to the admin console with the credentials you defined at installation: http://localhost:4848 Navigate to Resources | JDBC | JDBC Connection Pools and select New. Be sure to enter Resource Type & Datasource Classname under the General Settings tab. You can go with the defaults for Pool Settings etc. Go to the Additional Properties tab and create username, password, and url properties with the respective values. Navigate to Resources | JDBC | JDBC Resources and select New. Be sure to enter the JNDI Name and select the Pool Name of the JDBC connection pool you created previously. Navigate to Configurations | server-config | JVM Settings and select the JVM Options tab. Add -Doracle.jdbc.J2EE13Compliant=true, which is used to make sure the driver behaves in a JEE-compliant manner. To integrate the JDBC driver into a GlassFish Server domain, copy the JAR files into the domain-dir/lib directory, then restart the server. The JAR file for the Oracle 11 database driver is ojdbc6dms.jar. An upcoming entry will demonstrate configuring GlassFish for Oracle ADF Applications.
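    Once the pool and the JDBC resource exist, application code only needs the JNDI name. Here is a minimal sketch, assuming the resource was registered under the hypothetical name "jdbc/OracleXE"; call something like this from a servlet or EJB deployed to the domain:

        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.sql.DataSource;

        public class PoolSmokeTest {
            // Verifies the pool hands out working connections to Oracle XE.
            public static void ping() throws NamingException, SQLException {
                InitialContext ctx = new InitialContext();
                DataSource ds = (DataSource) ctx.lookup("jdbc/OracleXE"); // hypothetical JNDI name
                try (Connection con = ds.getConnection();
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT 1 FROM dual")) {
                    while (rs.next()) {
                        System.out.println("pool is alive: " + rs.getInt(1));
                    }
                }
            }
        }

    Looking the DataSource up once (or injecting it with @Resource) and letting the container manage pooling is the point of the exercise; never cache the Connection itself.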

    Read the article

  • Change Data Capture

    - by Ricardo Peres
    There's a hidden gem in SQL Server 2008: Change Data Capture (CDC). Using CDC we get full audit capabilities with absolutely no implementation code: we can see all changes made to a specific table, including the old and new values! You can only use CDC in SQL Server 2008 Standard or Enterprise; the Express edition is not supported. Here are the steps you need to take; just remember SQL Agent must be running:
    use SomeDatabase;
    -- first create a table
    CREATE TABLE Author
    (
        ID INT NOT NULL PRIMARY KEY IDENTITY(1, 1),
        Name NVARCHAR(20) NOT NULL,
        EMail NVARCHAR(50) NOT NULL,
        Birthday DATE NOT NULL
    )
    -- enable CDC at the DB level
    EXEC sys.sp_cdc_enable_db
    -- check CDC is enabled for the current DB
    SELECT name, is_cdc_enabled FROM sys.databases WHERE name = 'SomeDatabase'
    -- enable CDC for table Author, all columns
    exec sys.sp_cdc_enable_table @source_schema = 'dbo', @source_name = 'Author', @role_name = null
    -- insert values into table Author
    insert into Author (Name, EMail, Birthday) values ('Bla', 'bla@bla', '1990-10-10')
    -- check CDC data for table Author
    -- __$operation: 1 = DELETE, 2 = INSERT, 3 = BEFORE UPDATE, 4 = AFTER UPDATE
    -- __$start_lsn: operation timestamp
    select * from cdc.dbo_author_CT
    -- update table Author
    update Author set EMail = '[email protected]' where Name = 'Bla'
    -- check CDC data for table Author
    select * from cdc.dbo_author_CT
    -- delete from table Author
    delete from Author
    -- check CDC data for table Author
    select * from cdc.dbo_author_CT
    -- disable CDC for table Author
    -- this removes all CDC data, so be careful
    exec sys.sp_cdc_disable_table @source_schema = 'dbo', @source_name = 'Author', @capture_instance = 'dbo_Author'
    -- disable CDC for the entire DB
    -- this removes all CDC data, so be careful
    exec sys.sp_cdc_disable_db

    Read the article

  • hdmi AC-3 audio broke after upgrading from 11.10 to 12.04.3

    - by Jim LastName
    I just updated my MythBuntu 11.10 to 12.04.3. Now, when I try to play 5.1 content (ripped DVD), my TV (and receiver) plays a "chattering" sound. I check my receiver and the digital dolby light isn't on--it's in PCM mode. So, either the audio is getting sent as AC-3, but the TV and receiver think it's PCM or the AC-3 audio got converted to multichannel PCM and they can't handle it. My setup: hdmi cable from htpc to TV. TV has an s/pdif output to my receiver. I know TV sends AC-3 audio out correctly because I see digital dolby light come on when I view a digital TV channel and PCM come on when I view an old analog channel. I can connect s/pdif from my htpc to my receiver and the digital dolby light comes on and it can decode the audio just fine. It's just not sending it right over hdmi. Now for some hints to the issue: I noticed in MythTV audio setup when I select alsa:hdmi.... the description only lists 2 channel PCM audio capability. speaker-test -Dhdmi:PCH -c6 errors about a bad channel count (only -c2 works). Finally, I tried vlc and it does the same chattering sound. These all make me think this isn't a MythTV issue, it's something lower than that. I think the best way to troubleshoot this is to start at the drivers and check each layer, one at a time all the way to alsa. I just don't know what the layers are and how to do it. So, I need to find some audio troubleshooting guide to assist me. Or, if one doesn't exist, I'd appreciate some steps. Thanks much, Jim

    Read the article

  • CPanel - Wild card SSL - How to point *.domain.com to one root and sub.domain.com to another root

    - by Harry Muscle
    I have a wildcard (*.domain.com) SSL certificate installed on my CPanel server. I have domain.com configured to point to /domain.com as its document root and use this wildcard SSL certificate. I also have sub.domain.com configured to point to /sub.domain.com as its document root. Btw, I have not explicitly configured sub.domain.com to use the wildcard SSL certificate. When I go to "http://sub.domain.com" it goes to the correct document root, however my problem is that when I go to "https://sub.domain.com" it goes to the incorrect root: it goes to the root configured for the wildcard SSL. I've been trying to find information on how to go about configuring sub.domain.com to use the SSL certificate and go to the correct document root, however, so far I haven't found anything concrete. Do I use the same steps that I used for configuring the certificate for domain.com, but use the same certificate again and specify sub.domain.com as the domain that this certificate is for (instead of *.domain.com)? Or is there something else I should be doing? This is a production server, so I don't want to play around too much. I'm hoping to find the correct information before proceeding.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 28 (sys.dm_db_stats_properties)

    - by Tamarick Hill
    The sys.dm_db_stats_properties Dynamic Management Function returns information about the statistics that are currently on your database objects. This function takes two parameters, an object_id and a stats_id. Let’s have a look at the result set from this function against the AdventureWorks2012.Sales.SalesOrderHeader table. To obtain the object_id and stats_id I will use a CROSS APPLY with the sys.stats system table. SELECT sp.* FROM sys.stats s CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.Stats_id) sp WHERE sp.object_id = object_id('Sales.SalesOrderHeader') The first two columns returned by this function are the object_id and the stats_id columns. The next column, ‘last_updated’, gives you the date and the time that a particular statistic was last updated. The next column, ‘rows’, gives you the total number of rows in the table as of the last statistic update date. The ‘rows_sampled’ column gives you the number of rows that were sampled to create the statistic. The ‘steps’ column represents the number of specific value ranges from the statistic histogram. The ‘unfiltered_rows’ column represents the number of rows before any filters are applied. If a particular statistic is not filtered, the ‘unfiltered_rows’ column will always equal the ‘rows’ column. Lastly we have the ‘modification_counter’ column, which represents the number of modifications to the leading column in a given statistic since the last time the statistic was updated. Probably the most important column from this Dynamic Management Function is the ‘last_updated’ column. You want to always ensure that you have accurate and updated statistics on your database objects. Accurate statistics are vital for the query optimizer to generate efficient and reliable query execution plans. Without accurate and updated statistics, the performance of your SQL Server would likely suffer. For more information about this Dynamic Management Function, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/jj553546.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • E-Business Tax Release 12 Setup - US Location Based Taxes Part 1, Prerequisites & Regimes

    - by Robert Story
    Upcoming Webcast
    Title: E-Business Tax Release 12 Setup - US Location Based Taxes Part 1, Prerequisites & Regimes
    Date: April 28, 2010
    Time: 12:00 pm EDT
    Product Family: Receivables Community
    Summary: This one-hour session is part one of two on setting up a fresh implementation of US Location Based Taxes in Oracle E-Business Tax. It is recommended for functional users who wish to understand the steps involved in setting up E-Business Tax in Release 12. Topics will include: Overview of E-Business Tax, Location setup, Regime to Rate Flow, Tax Regimes, Taxes, Tax Statuses, Tax Jurisdictions, Tax Recovery Rates, Tax Rates, Subscribing the Operating Unit to a Regime to Rate Flow, and a brief demonstration. A short, live demonstration (only if applicable) and question and answer period will be included. Click here to register for this session. The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Could not calculate upgrade from Maverick Meerkat to Natty Narwhal

    - by xralf
    I upgraded from Ubuntu Lucid Lynx to Maverick Meerkat with the following commands: sudo apt-get update && sudo apt-get upgrade sudo apt-get install update-manager-core sudo vi /etc/update-manager/release-upgrades and changed the last line to Prompt=normal sudo do-release-upgrade -d This upgrade was OK. I decided to repeat the same steps and to upgrade Maverick Meerkat to Natty Narwhal. It ended with this message: Building data structures... Done Calculating the changes Calculating the changes Could not calculate the upgrade An unresolvable problem occurred while calculating the upgrade: Can not mark 'xubuntu-desktop' for upgrade This can be caused by: * Upgrading to a pre-release version of Ubuntu * Running the current pre-release version of Ubuntu * Unofficial software packages not provided by Ubuntu If none of this applies, then please report this bug against the 'update-manager' package and include the files in /var/log/dist-upgrade/ in the bug report. Restoring original system state Aborting Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done === Command detached from window (Mon Nov 21 09:37:21 2011) === === Command terminated with exit status 1 (Mon Nov 21 09:37:21 2011) === How can I correct it?

    Read the article

  • Become an Oracle Solaris 11 Certified Implementation Specialist!

    - by uwes
    Have you heard about one of the newest certifications from Oracle, the Oracle Solaris 11 Certified Implementation Specialist? If you already have a background in Oracle Solaris, have some previous UNIX knowledge, or are working with or for an Oracle Partner that’s pursuing Oracle Solaris 11 Specialization, then you may be interested in the many different ways to gain this highly valued industry certification. An Oracle Certified Implementation Specialist is recognized as capable of installing, configuring, and implementing Oracle Solaris 11 on enterprise class SPARC and x86 systems. This certification is highly valued by Oracle customers and partners alike, since you will have obtained an updated skill set on the newest and most powerful operating system release from Oracle which will set your company apart. If you’ve already achieved an industry certification in Solaris then you’re just a few steps away from becoming an Oracle Solaris 11 Certified Implementation Specialist. Also, if you’re new to Oracle Solaris, we have a path for you too. Listed below are some of the many options Oracle offers in delivering training the way you need it to help you achieve your goal of being recognized as an Oracle Solaris 11 Implementation Specialist. Which path best describes you?
    New to UNIX but want/need to achieve Certified status? Training Paths: Oracle Certified Associate, Oracle Solaris 11 System Administrator. Exam: 1Z0-821 – Oracle Solaris 11 System Administration.
    Certified on an earlier version of Solaris and want full Administration Certification? Recommended Training class: Transition to Oracle Solaris 11. Exam: 1Z0-820 – Transition to Oracle Solaris 11.
    Certified on an earlier version of Solaris and want the partner based Implementation Certification? Recommended Training Path: OPN Guided Learning Path. Exam: 1Z0-580 – Oracle Solaris 11 Installation and Configuration Essentials.
    Get Started Today!

    Read the article

  • wired connection not working in ubuntu 12.04 on lenovo G580 laptop

    - by shravankumar
    I found a solution at http://www.zyxware.com/articles/2680/solved-wired-connection-eth0-not-detected-in-ubuntu-12-04 I downloaded compat-wireless-2012-07-03-p.tar.bz2. Here are the steps I followed, along with the output:
    1. shravankumar@shravankumar-Lenovo-G580:~/Desktop/compat-wireless-2012-07-03-p$ scripts/driver-select alx
    Output: Processing new driver-select request... Backup exists: Makefile.bk Backup exists: Makefile.bk Backup exists: drivers/net/ethernet/broadcom/Makefile.bk Backup exists: drivers/net/ethernet/atheros/Makefile.bk Backup exists: Makefile.bk Backup exists: Makefile.bk Backup exists: drivers/net/ethernet/broadcom/Makefile.bk
    2. shravankumar@shravankumar-Lenovo-G580:~/Desktop/compat-wireless-2012-07-03-p$ make
    Output: make -C /lib/modules/3.2.0-23-generic/build M=/home/shravankumar/Desktop/compat-wireless-2012-07-03-p modules make[1]: Entering directory `/usr/src/linux-headers-3.2.0-23-generic' scripts/Makefile.build:44: /home/shravankumar/Desktop/compat-wireless-2012-07-03-p/drivers/net/ethernet/atheros/alx/Makefile: No such file or directory make[4]: *** No rule to make target `/home/shravankumar/Desktop/compat-wireless-2012-07-03-p/drivers/net/ethernet/atheros/alx/Makefile'. Stop. make[3]: *** [/home/shravankumar/Desktop/compat-wireless-2012-07-03-p/drivers/net/ethernet/atheros/alx] Error 2 make[2]: *** [/home/shravankumar/Desktop/compat-wireless-2012-07-03-p/drivers/net/ethernet/atheros] Error 2 make[1]: *** [_module_/home/shravankumar/Desktop/compat-wireless-2012-07-03-p] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-23-generic' make: *** [modules] Error 2
    3. shravankumar@shravankumar-Lenovo-G580:~/Desktop/compat-wireless-2012-07-03-p$ make install
    Output: FATAL: Could not open /lib/modules/3.2.0-23-generic/modules.dep.temp for writing: Permission denied make: *** [uninstall] Error 1
    4. shravankumar@shravankumar-Lenovo-G580:~/Desktop/compat-wireless-2012-07-03-p$ modeprobe alx
    Output: No command 'modeprobe' found, did you mean: Command 'modprobe' from package 'module-init-tools' (main) modeprobe: command not found
    I am new to Ubuntu, please help me. Thanks in advance.

    Read the article

  • Writing the correct value in the depth buffer when using ray-casting

    - by hidayat
    I am doing a ray-casting in a 3d texture until I hit a correct value. I am doing the ray-casting in a cube and the cube corners are already in world coordinates, so I don't have to multiply the vertices with the modelview matrix to get the correct position. Vertex shader:
    world_coordinate_ = gl_Vertex;
    Fragment shader:
    vec3 direction = (world_coordinate_.xyz - cameraPosition_);
    direction = normalize(direction);
    for (float k = 0.0; k < steps; k += 1.0) {
        ....
        pos += direction*delta_step;
        float thisLum = texture3D(texture3_, pos).r;
        if(thisLum > surface_)
            ...
    }
    Everything works as expected. What I now want is to write the correct value into the depth buffer. The value that is now written to the depth buffer is the cube coordinate. But I want the value of pos in the 3d texture to be written. So lets say the cube is placed 10 away from origin in -z and the size is 10*10*10. My solution that does not work correctly is this:
    pos *= 10;
    pos.z += 10;
    pos.z *= -1;
    vec4 depth_vec = gl_ProjectionMatrix * vec4(pos.xyz, 1.0);
    float depth = ((depth_vec.z / depth_vec.w) + 1.0) * 0.5;
    gl_FragDepth = depth;

    Read the article

  • Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell

    - by SQLOS Team
    Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell. This blog post comes from Khalid Mouss, Senior Program Manager in Microsoft SQL Server.
    Overview
    The goal of this blog is to demonstrate how we can automate, through PowerShell, connecting multiple SQL Server deployments in Windows Azure Virtual Machines. We would configure a TCP port that we would open (and close) through the Windows firewall from a remote PowerShell session to the Virtual Machine (VM). This will demonstrate how to take advantage of the remote PowerShell support in Windows Azure Virtual Machines to automate the steps required to connect SQL Server in the same cloud service and in different cloud services.
    Scenario 1: VMs connected through the same Cloud Service
    2 Virtual machines configured in the same cloud service. Both VMs running different SQL Server instances on them. Both VMs configured with remote PowerShell turned on, to be able to run PS and other commands directly on them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on-premise machine(s). Note: RDP (Remote Desktop Protocol) is kept configured in both VMs by default to be able to remote connect to them and check the connections to SQL instances, for demo purposes only; it is not actually required.
    Step 1 – Provision VMs and Configure Ports
    Provision VM1 (DemoVM1) as follows (see example screenshots below if using the portal). Provision VM2 (DemoVM2) with PowerShell Remoting enabled and connected to DemoVM1 above (see example screenshots below if using the portal). After provisioning of the 2 VMs above, here are the default port configurations, for example:
    Step 2 – Verify / Confirm the TCP port used by the Database Engine
    By default, the port will be configured to be 1433 – this can be changed to a different port number if desired.
    1. RDP to each of the VMs created above – this will also ensure the VMs complete SysPrep(ing) and complete configuration.
    2. Go to SQL Server Configuration Manager -> SQL Server Network Configuration -> Protocols for <SQL instance> -> TCP/IP -> IP Addresses.
    3. Confirm the port number used by the SQL Server Engine; in this case 1433.
    4. Update from Windows Authentication to Mixed mode.
    5. Restart the SQL Server service for the change to take effect.
    6. Repeat steps 3, 4, and 5 for the second VM: DemoVM2.
    Step 3 – Remote PowerShell to DemoVM1
    Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <username> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)
    You will then be prompted to enter the password.
    Step 4 – Open port 1433 in the Windows firewall
    netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow
    Output:
    netsh advfirewall firewall show rule name=DemoVM1Port
    Rule Name: DemoVM1Port
    ----------------------------------------------------------------------
    Enabled: Yes
    Direction: In
    Profiles: Domain,Private,Public
    Grouping:
    LocalIP: Any
    RemoteIP: Any
    Protocol: TCP
    LocalPort: 1433
    RemotePort: Any
    Edge traversal: No
    Action: Allow
    Ok.
    Step 5 – Now connect from DemoVM2 to the DB instance in DemoVM1
    Step 6 – Close port 1433 in the Windows firewall
    netsh advfirewall firewall delete rule name=DemoVM1Port
    Output:
    Deleted 1 rule(s).
    Ok.
    netsh advfirewall firewall show rule name=DemoVM1Port
    No rules match the specified criteria.
    Step 7 – Try to connect from DemoVM2 to the DB instance in DemoVM1
    Because port 1433 has been closed (in step 6) in the Windows Firewall on the VM1 machine, we can no longer connect from VM2 remotely to VM1.
    Scenario 2: VMs provisioned in different Cloud Services
    2 Virtual machines configured in different cloud services. Both VMs running different SQL Server instances on them. Both VMs configured with remote PowerShell turned on, to be able to run PS and other commands directly on them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on-premise machine(s). Note: RDP (Remote Desktop Protocol) is kept configured in both VMs by default to be able to remote connect to them and check the connections to SQL instances, for demo purposes only; it is not actually needed.
    Step 1 – Provision new VM3
    Provision VM3 (DemoVM3) as follows (see example screenshots below if using the portal). After provisioning is complete, here are the default port configurations:
    Step 2 – Add a public port to VM1 to connect to from VM3's DB instance
    Since VM3 and VM1 are not connected through the same cloud service, we will need to specify the full DNS address while connecting between the machines, which includes the public port. We shall add a public port 57000 in this case that is linked to private port 1433, which will be used later to connect to the DB instance.
    Step 3 – Remote PowerShell to DemoVM1
    Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <UserName> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)
    You will then be prompted to enter the password.
    Step 4 – Open port 1433 in the Windows firewall
    netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow
    Output:
    Ok.
    netsh advfirewall firewall show rule name=DemoVM1Port
    Rule Name: DemoVM1Port
    ----------------------------------------------------------------------
    Enabled: Yes
    Direction: In
    Profiles: Domain,Private,Public
    Grouping:
    LocalIP: Any
    RemoteIP: Any
    Protocol: TCP
    LocalPort: 1433
    RemotePort: Any
    Edge traversal: No
    Action: Allow
    Ok.
    Step 5 – Now connect from DemoVM3 to the DB instance in DemoVM1
    RDP into VM3, launch SSMS and connect to VM1's DB instance as follows. You must specify the full server name using the DNS address and the public port number configured above.
    Step 6 – Close port 1433 in the Windows firewall
    netsh advfirewall firewall delete rule name=DemoVM1Port
    Output:
    Deleted 1 rule(s).
    Ok.
    netsh advfirewall firewall show rule name=DemoVM1Port
    No rules match the specified criteria.
    Step 7 – Try to connect from DemoVM3 to the DB instance in DemoVM1
    Because port 1433 has been closed (in step 6) in the Windows Firewall on the VM1 machine, we can no longer connect from VM3 remotely to VM1.
    Conclusion
    Through the new support for remote PowerShell in Windows Azure Virtual Machines, one can script and automate many Virtual Machine and SQL management tasks. In this blog, we have demonstrated how to start a remote PowerShell session and re-configure the Virtual Machine firewall to allow (or disallow) SQL Server connections. References: SQL Server in Windows Azure Virtual Machines. Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • How To Attach Visual Studio 2010 To IIS Process Running On Windows 7

    - by Gopinath
    Debugging an ASP.NET application hosted on IIS 7 running on Windows 7 using Visual Studio 2010 is a bit different from debugging applications hosted on IIS running on Windows XP. The key differences are that Visual Studio 2010 demands administrator mode and IIS runs under the process name w3wp.exe instead of aspnet_wp.exe. Here are the detailed steps to attach the Visual Studio 2010 debugger to IIS 7 on Windows 7. 1. Launch Visual Studio 2010 in administrator mode (right-click on the Visual Studio icon and choose the option Run as Administrator). 2. Open the source code of the site you want to debug. 3. Go to Tools -> Attach to Process; this opens the Attach to Process dialog. 4. In the Attach to Process dialog box, check the option Show processes in all sessions. 5. Search for the process w3wp.exe and click on the Attach button. 6. Accept the warning messages. That's it, you are done: Visual Studio is now attached to IIS for debugging. Happy coding. This article, How To Attach Visual Studio 2010 To IIS Process Running On Windows 7, was originally published at Tech Dreams.

    Read the article

  • High load (and high temp) with idle processes

    - by Nanne
    I've got a semi-old laptop (Toshiba Satellite A110-228) that's been appointed 'laptop for the kids' by my sister. I've installed Ubuntu Netbook (10.10) on it because of the lack of memory, and it seems to work fine, except for some heat issues. These were never a problem under Windows. It looks like I've got something similar to this problem: load is generally 1 or higher; sometimes it's stuck at 0.80, but it's way too high. Top/htop only show a couple of percent CPU use (which isn't too shocking, as I'm not doing anything). At this point all the software is stock, and I'd like to keep it that way because it's supposed to be the easy-to-maintain kids' computer. Now I'd like to find out: What could be the cause of the high load? Could it be, as this thread implies, some driver, and are there other options to check? How could I see what is really keeping the system hot and bothered? How to check what runs, etc.? I'd like to pinpoint the culprit. Further steps to take for debugging? The big bad internet leads me to believe that it might be the graphics drivers. The laptop has an Intel 945M chipset, but that doesn't seem to be one of the problem children in this manner (I read a lot about ATI drivers that need special installs). I'd not only welcome hints to directly solve this (duh) but also help in starting to debug what is going on. I am really hesitant to install an older kernel, as I want it to be stock and easily upgradeable (because I don't live near it, it should run without me ;) ) As an afterthought: to keep the whole thing cooler, can I 'amp up' the fan control? It's only going "airplane mode" when the computer is 95 Celsius, which is a tad late for my taste.

    Read the article

  • How should I start with Lisp?

    - by Gary Rowe
    I've been programming for years now, working my way through various iterations of Blub (BASIC, Assembler, C, C++, Visual Basic, Java, Ruby in no particular order of "Blub-ness") and I'd like to learn Lisp. However, I have a lot of inertia what with limited time (family, full-time job, etc.) and a comfortable happiness with my current Blub (Java). So my question is this: given that I'm someone who would really like to learn Lisp, what would be the initial steps to get a good result that demonstrates the superiority of Lisp in web development? Maybe I'm missing the point, but that's how I would initially see the application of my Lisp knowledge. I'm thinking "use dialect A, use IDE B, follow instructions on page C, question your sanity after monads using counsellor D". I'd just like to know what people here consider to be an optimal set of values for A, B, C and perhaps D. Also some discussion on the relative merit of learning such a powerful language as opposed to, say, becoming a Rails expert. Just to add some more detail, I'll be developing on MacOS (or a Linux VM) - no Windows-based approaches will be necessary, thanks. Notes for those just browsing by: I'm going to keep this question open for a while so that I can offer feedback on the suggestions after I've been able to explore them. If you happen to be browsing by and feel you have something to add, please do. I would really welcome your feedback. Interesting links (assuming you're coming at Lisp from a Java background, this set of links will get you started quickly): Using Intellij's La Clojure plugin to integrate Lisp (videocast); Lisp for the Web; online version of Practical Common Lisp (c/o Frank Shearar); Land of Lisp, a (+ (+ very quirky) game based) way in, but makes it all so straightforward.

    Read the article

  • ArchBeat Link-o-Rama for 11/14/2011

    - by Bob Rhubart
    InfoQ: Developer-Driven Threat Modeling. Threat modeling is critical for assessing and mitigating the security risks in software systems. In this IEEE article, author Danny Dhillon discusses a developer-driven threat modeling approach to identify threats using dataflow diagrams. Managing the Virtual World | Philip J. Gill. "The killer app for virtualization has been server consolidation," says Al Gillen, program vice president for systems software at market research firm International Data Corporation (IDC). Solaris X86 AESNI OpenSSL Engine | Dan Anderson. "Having X86 AESNI hardware crypto instructions is all well and good, but how do we access it? The software is available with Solaris 11 and is used automatically if you are running Solaris x86 on an AESNI-capable processor," says Anderson. WebLogic Access Management | René van Wijk. "This post is a continuation of the post WebLogic Identity Management. In this post we will present the steps involved to integrate WebLogic and Oracle Access Manager," says Oracle ACE René van Wijk. OTN Developer Days in the Nordics - Helsinki, Oslo, Stockholm, and Copenhagen: OTN Developer Days head for the land of the midnight sun. Podcast: Information Integration Part 2/3. In part two of a three-part program, Oracle Information Integration, Migration, and Consolidation authors Jason Williamson, Tom Laszewski, and Marc Hebert offer examples of some of the most daunting information integration challenges. Measuring the Human Task activity in Oracle BPM | Leon Smiers. Leon Smiers discusses using Oracle BPM to get answers to important questions about what's happening with business processes. Architecture all day: Oracle Technology Network Architect Day - Phoenix, AZ - Dec 14. Spend the day with your peers learning from experts in Cloud computing, engineered systems, and Oracle Fusion Middleware. The Heroes of Java: Michael Hüttermann | Markus Eisele. Oracle ACE Director Markus Eisele interviews Java Champion Michael Hüttermann on his role, his process, and on why he uses Java.

    Read the article

< Previous Page | 182 183 184 185 186 187 188 189 190 191 192 193  | Next Page >