Search Results

Search found 11126 results on 446 pages for 'hardware requirements'.


  • Alternative routers to the Cisco SA 500

    - by Justin
    We are evaluating the Cisco SA 500 as our new office router. Can anyone recommend a similarly featured router from another manufacturer? Requirements:
    - Office of 14 people
    - We are likely to switch to 14 VoIP phones (Linksys SPA-942) soon
    - We want to use VPN on the router, if possible, with Windows and Mac users


  • Django | Apache | Deploy website behind SSL

    - by planet260
    So here are my requirements. I have a website built in Django, deployed with Apache on Ubuntu. Before, no SSL was involved, so the deployment was pretty simple. But now the requirements have changed: I have to put a few actions like signup and login behind SSL, and serve the admin panel and everything else normally via HTTP. By following this tutorial I have set up Apache and SSL and generated certificates for SSL communication. But I am not sure how to proceed, i.e. how to serve only a few of my actions through SSL. Below is my configuration. The normal actions are working fine, but I don't know how to configure the SSL calls.

        WSGIScriptAlias / /home/ubuntu/myproject/src/myproject/wsgi.py
        WSGIPythonPath /home/ubuntu/myproject/src

        <VirtualHost *:80>
            ServerName mydomain.com
            <Directory /home/ubuntu/myproject/src/myproject>
                <Files wsgi.py>
                    Order deny,allow
                    Allow from all
                </Files>
            </Directory>
            Alias /static/admin/ "/home/ubuntu/myproject/src/static/admin/"
            <Directory "/home/ubuntu/myproject/src/static/admin/">
                Order allow,deny
                Options Indexes
                Allow from all
                IndexOptions FancyIndexing
            </Directory>
            <Location "/login">
                RewriteEngine on
                RewriteRule /admin(.*)$ https://mydomain.com/login$1 [L,R=301]
            </Location>
        </VirtualHost>

        <VirtualHost *:443>
            ServerName mydomain.com
            SSLEngine on
            SSLOptions +StrictRequire
            SSLCertificateFile /etc/apache2/ssl/apache.crt
            SSLCertificateKeyFile /etc/apache2/ssl/apache.key
            <Directory /home/ubuntu/myproject/src/myproject>
                <Files wsgi.py>
                    Order deny,allow
                    Allow from all
                </Files>
            </Directory>
            Alias /static/admin/ "/home/ubuntu/myproject/src/static/admin/"
            <Directory "/home/ubuntu/myproject/src/static/admin/">
                Order allow,deny
                Options Indexes
                Allow from all
                IndexOptions FancyIndexing
            </Directory>
        </VirtualHost>

    Can you please help me out on how to achieve this? What am I doing wrong? I have read a lot of tutorials, but honestly I am not really good at configurations. Any help is appreciated.
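
    A hedged sketch of one way to split the site, assuming mod_rewrite is enabled and that /login and /signup are the only paths that must be secure (path names are illustrative; match them to your urls.py): redirect those paths to HTTPS in the *:80 vhost, and bounce everything else back to HTTP in the *:443 vhost.

        # Inside <VirtualHost *:80>: send only the sensitive paths to HTTPS
        RewriteEngine on
        RewriteRule ^/(login|signup)(.*)$ https://mydomain.com/$1$2 [L,R=301]

        # Inside <VirtualHost *:443>: send everything else back to plain HTTP
        RewriteEngine on
        RewriteCond %{REQUEST_URI} !^/(login|signup)
        RewriteRule ^/(.*)$ http://mydomain.com/$1 [L,R=301]

    One caveat worth knowing: a session cookie issued over HTTPS is still sent on the later HTTP requests unless it is marked secure, so this split protects the password in transit but not the whole session; that trade-off is inherent to mixed HTTP/HTTPS sites.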


  • Can you recommend a good replacement for Windows Sound Recorder?

    - by andygrunt
    Can anyone recommend a good, free replacement for the Windows 'Sound Recorder'? Requirements:
    - Very quick to load and run
    - Allows saving to a compact sound format (e.g. MP3)
    - Allows long recordings (1+ hours)
    - Free
    - Runs on Windows XP
    Would be nice, but probably not essential:
    - Saves along the way (in case of crashes)
    - Allows simple edits (trim start and end, and maybe remove chunks in the middle)
    Before anyone suggests it, I'm aware of (and have used) Audacity, but I'm really after something as simple and lightweight as possible.


  • Safe to disable compile options for Nginx (when used only as reverse proxy/cache)

    - by Alex
    I have read that I can build a smaller-footprint Nginx for use as a static content cache/reverse proxy by disabling the mail modules:

        --without-mail_pop3_module
        --without-mail_imap_module
        --without-mail_smtp_module

    What other options are safe to disable? SSI, FastCGI? Others? The only requirements for the reverse proxy are HTTPS and gzip compression. Will disabling all these modules really help with footprint and/or performance?
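
    For what it's worth, a sketch of a trimmed build along those lines; the flag names come from nginx's ./configure --help, so verify them against your version before relying on this:

        # Keep SSL; gzip is compiled in by default. Drop the mail modules plus
        # HTTP modules a pure reverse proxy/cache does not need.
        ./configure \
          --with-http_ssl_module \
          --without-mail_pop3_module \
          --without-mail_imap_module \
          --without-mail_smtp_module \
          --without-http_ssi_module \
          --without-http_fastcgi_module \
          --without-http_autoindex_module \
          --without-http_userid_module

    Expect a somewhat smaller binary and less configuration surface, but little runtime difference: modules that are compiled in but unused mostly cost nothing per request.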


  • How do I override apt-get removing dependent packages?

    - by iainH
    I want to replace postfix with exim4 on my Ubuntu test server to reflect the setup I have on my production server, but apt-get and aptitude (quite understandably) insist on removing several packages that depend upon having a mail stack. However, in this case I am prepared to override apt-get's undoubted good sense, as exim should fulfil all the requirements of the dependent packages, providing mail and sendmail functionality for my applications. I don't want to remove the dependent packages: they represent months of invested effort and, although backed up, would be a pain to reconstruct properly.
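
    Two hedged options, sketched below; package and file names are illustrative. exim4 normally Provides the mail-transport-agent virtual package that dependent packages actually require, so a straight swap is worth trying first, and the equivs tool covers stubborn cases by building a dummy package that satisfies the dependency on paper.

        # Option 1: straight swap -- apt should replace postfix but keep the
        # dependents, since exim4 Provides: mail-transport-agent
        sudo apt-get install exim4

        # Option 2: if apt still insists on removals, build a dummy package
        sudo apt-get install equivs
        equivs-control dummy-mta.ctl
        # edit dummy-mta.ctl: set Package: and Provides: mail-transport-agent
        equivs-build dummy-mta.ctl
        sudo dpkg -i dummy-mta_*_all.deb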


  • Need suggestions for a Single Board Linux Computer?

    - by Joernsn
    I need suggestions for a single-board computer with these requirements:
    - Runs Linux (preferably pre-installed), for easy scripting and a full network stack
    - WiFi (an I/O module is fine)
    - Does not need much computing power
    I'm using it for applications like tweeting when the coffee at the office is ready, etc.


  • Best program to conduct a presentation for investors

    - by the_drow
    Hello, I need a free program that meets these specific requirements:
    - Can load Word/PowerPoint files and let the presenter control them
    - Allows voice/video to be transmitted to all participants
    - Is free (preferred but not required) or very cheap
    - Is easy to install
    Anyone got recommendations for me?


  • Is there a way to tail a log from a remote server without using any user credentials?

    - by suhprano
    I run a script that tails a log on a remote server, like so:

        ssh userx@someip tail -f /data/current.log | python2.7 monitorlog.py

    Dependencies and service requirements (DB, ACLs, and the path to another service it uses) prevent me from running the script on the remote server itself. Is there a way I can tail and monitor the log without the ssh userx@someip part? I thought about generating RSA keys, but I think you still need a user to ssh.
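
    You still need an account on the remote side, but a dedicated passwordless key locked to a forced command avoids interactive credentials and cannot be used for anything except the tail. A sketch, reusing the names from the question:

        # On the monitoring machine: generate a key with no passphrase
        ssh-keygen -t rsa -f ~/.ssh/tail_key -N ""

        # On the remote server: one line in ~userx/.ssh/authorized_keys pins
        # the key to the tail command and disables everything else
        command="tail -f /data/current.log",no-pty,no-port-forwarding ssh-rsa AAAA...

        # The monitor then runs non-interactively:
        ssh -i ~/.ssh/tail_key userx@someip | python2.7 monitorlog.py

    If even that is too much, the longer-term answer is to push the log instead, e.g. rsyslog/syslog-ng forwarding from the remote server to the monitoring host, which needs no login at all.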


  • Binary diff/patch for large files on linux?

    - by thejh
    I've got two partition images (A and B) and want to use them to create a patch that I can apply to A on another computer in order to get the new B image without flooding the network. I have the following requirements:
    - works on Linux
    - can create diffs
    - can use diffs to patch files
    - can handle binary files
    - can handle large files (a few hundred GB should work)
    - no user interaction required (just a console application)
    - ideally, should be able to read from/write to pipes (so that I can pipe into it from a gzip-compressed file and write to one)
    Does something like that exist?
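
    xdelta3 is one tool worth testing against this list (bsdiff is another common answer, but its memory use grows with file size, which hurts at hundreds of GB). A sketch of the round trip; note that the source file given with -s generally has to be seekable, while the remaining input/output default to stdin/stdout and can be piped:

        # Create a delta that turns image A into image B
        xdelta3 -e -s A.img B.img A_to_B.vcdiff

        # On the other machine, rebuild B from its local copy of A
        xdelta3 -d -s A.img A_to_B.vcdiff B.img

        # The non-source side can be piped, e.g. from a gzipped image
        gzip -dc B.img.gz | xdelta3 -e -s A.img > A_to_B.vcdiff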


  • Help with Design for Vacation Tracking System (C#/.NET/Access/WebServices/SOA/Excel) [closed]

    - by Aaronaught
    I have been tasked with developing a system for tracking our company's paid time-off (vacation, sick days, etc.) At the moment we are using an Excel spreadsheet on a shared network drive, and it works pretty well, but we are concerned that we won't be able to "trust" employees forever, and sometimes we run into locking issues when two people try to open the spreadsheet at once. So we are trying to build something a little more robust. I would like some input on this design in terms of maintainability, scalability, extensibility, etc. It's a pretty simple workflow we need to represent right now. I started with a basic MS Access schema like this:

        Employees (EmpID int, EmpName varchar(50), AllowedDays int)
        Vacations (VacationID int, EmpID int, BeginDate datetime, EndDate datetime)

    But we don't want to spend a lot of time building a schema and database like this and have to change it later, so I think I am going to go with something that will be easier to expand through configuration. Right now the vacation table has this schema:

        Vacations (VacationID int, PropName varchar(50), PropValue varchar(50))

    And the table will be populated with data like this:

        VacationID | PropName     | PropValue
        -----------+--------------+------------------
        1          | EmpID        | 4
        1          | EmpName      | James Jones
        1          | Reason       | Vacation
        1          | BeginDate    | 2/24/2010
        1          | EndDate      | 2/30/2010
        1          | Destination  | Spectate Swamp
        2          | ...          | ...

    I think this is a pretty good, extensible design: we can easily add new properties to the vacation, like the destination or maybe approval status, etc. I wasn't too sure how to go about managing the database of valid properties. I thought of putting them in a separate PropNames table, but it gets complicated to manage all the different data types, and people say that you shouldn't put CLR type names into a SQL database, so I decided to use XML instead. Here is the schema:

        <VacationProperties>
          <PropertyNames>EmpID,EmpName,Reason,BeginDate,EndDate,Destination</PropertyNames>
          <PropertyTypes>System.Int32,System.String,System.String,System.DateTime,System.DateTime,System.String</PropertyTypes>
          <PropertiesRequired>true,true,false,true,true,false</PropertiesRequired>
        </VacationProperties>

    I might need more fields than that, I'm not completely sure.
    I'm parsing the XML like this (would like some feedback on the parsing code):

        string xml = File.ReadAllText("properties.xml");
        Match m = Regex.Match(xml, "<(PropertyNames)>(.*?)</PropertyNames>");
        string[] pn = m.Value.Split(',');
        // do the same for PropertyTypes, PropertiesRequired

    Then I use the following code to persist configuration changes to the database:

        string sql = "DROP TABLE VacationProperties";
        sql = sql + " CREATE TABLE VacationProperties ";
        sql = sql + "(PropertyName varchar(100), PropertyType varchar(100) ";
        sql = sql + "IsRequired varchar(100))";
        for (int i = 0; i < pn.Length; i++)
        {
            sql = sql + " INSERT VacationProperties VALUES (" + pn[i] + "," + pt[i] + "," + pv[i] + ")";
        }
        // GlobalConnection is a singleton
        new SqlCommand(sql, GlobalConnection.Instance).ExecuteReader();

    So far so good, but after a few days of this I realized that a lot of this was just a more specific kind of generic workflow which could be further abstracted, and instead of writing all of this boilerplate plumbing code I could just come up with a workflow and plug it into a workflow engine like Windows Workflow Foundation and have the users configure it. In order to support routing these configurations through the workflow system, it seemed natural to implement generic XML Web Services for this instead of just using an XML file as above. I've used this code to implement the Web Services:

        public class VacationConfigurationService : WebService
        {
            [WebMethod]
            public void UpdateConfiguration(string xml)
            {
                // Above code goes here
            }
        }

    Which was pretty easy, although I'm still working on a way to validate that XML against some kind of schema, as there's no error-checking yet. I also created a few different services for other operations like VacationSubmissionService, VacationReportService, VacationDataService, VacationAuthenticationService, etc. The whole Service Oriented Architecture looks like this: And because the workflow itself might change, I have been working on a way to integrate the WF workflow system with MS Visio, which everybody at the office already knows how to use, so they could make changes pretty easily. We have a diagram that looks like the following (it's kind of hard to read, but the main items are Activities, Authenticators, Validators, Transformers, Processors, and Data Connections; they're all analogous to the services in the SOA diagram above).

    The requirements for this system are (note: I don't control these, they were given to me by management):
    - Main workflow must interface with the Excel spreadsheet, probably through VBA macros (to ease the transition to the new system)
    - Alerts should integrate with MS Outlook, Lotus Notes, and SMS (text messages); we also want to interface it with the company Voice Mail system, but that is not a "hard" requirement
    - Performance requirements: must handle 250,000 Transactions Per Second; should be able to handle up to 20,000 employees (right now we have 3); 99.99% uptime ("four nines") expected
    - Must be secure against outside hacking, but users cannot be required to enter a username/password
    - Platforms: must support Windows XP/Vista/7, Linux, iPhone, Blackberry, DOS 2.0, VAX, IRIX, PDP-11, Apple IIc
    - Time to complete: 6 to 8 weeks

    My questions are:
    - Is this a good design for the system so far?
    - Am I using all of the recommended best practices for these technologies?
    - How do I integrate the Visio diagram above with Windows Workflow Foundation to call the ConfigurationService and persist workflow changes?
    - Am I missing any important components?
    - Will this be extensible enough to support any scenario via end-user configuration?
    - Will the system scale to the above performance requirements? Will we need any expensive hardware to run it?
    - Are there any "gotchas" I should know about with respect to cross-platform compatibility? For example, would it be difficult to convert this to an iPhone app?
    - How long would you expect this to take? (We've dedicated 1 week for testing, so I'm thinking maybe 5 weeks?)

    Many thanks for your advice, Aaron
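
    One concrete piece of feedback on the persistence code above, as a sketch rather than a drop-in fix: concatenating pn[i] into the SQL string breaks on quotes and commas and is wide open to injection, and ExecuteReader is the wrong call for statements that return no rows. The same insert loop with parameters (assumes using System.Data; using System.Data.SqlClient; an open SqlConnection conn; and the pn/pt/pv arrays from the question):

        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO VacationProperties (PropertyName, PropertyType, IsRequired) " +
            "VALUES (@name, @type, @required)", conn))
        {
            cmd.Parameters.Add("@name", SqlDbType.VarChar, 100);
            cmd.Parameters.Add("@type", SqlDbType.VarChar, 100);
            cmd.Parameters.Add("@required", SqlDbType.VarChar, 100);
            for (int i = 0; i < pn.Length; i++)
            {
                cmd.Parameters["@name"].Value = pn[i];
                cmd.Parameters["@type"].Value = pt[i];
                cmd.Parameters["@required"].Value = pv[i];
                cmd.ExecuteNonQuery();  // nothing is returned, so not ExecuteReader
            }
        }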


  • Architecture choice about representation of collections in Business Objects

    - by Rajarshi
    I have made certain choices in my architecture which I request the community to review and comment on. I am breaking the post into smaller sections to make it easier to understand the context and then suggest/comment. I am sorry that the post is long, but it is required to explain the context.

    What am I building
    A typical business application where there are application users, security roles, business operation/action rights based on roles, and several business modules like Stock Receive, Stock Transfer, Sale Order, Sale Invoice, Sale Return, Stock Audit etc., and several reports. The application is a WinForm application, since it has a lot of rich and responsive UI requirements and has to operate in disconnected mode (with a local SQL Server) most of the time.

    What have I done
    I have built a framework - nothing to boast about, but just a set of libraries that serves the repetitive requirements of my application, e.g. authentication, role-based authorization, data access, validation, exception handling, logging, change status tracking, presentation model compliance and reasonable loose coupling between components. No, I have not written everything from scratch; you can say I have consolidated many things together - some concepts from CSLA, Martin Fowler for the Presentation Model, blocks from Enterprise Library, Unity etc. - to build a set of libraries that will help my developers be productive quickly without having to look up Google for many of the technical requirements. I have tried to keep the framework generic so that it can be used in typical business applications, and also tried to follow some best practices that will allow the same Business Objects to be used in an ASP.NET MVC environment as well. My present architecture serves my objectives well, and I have built several modules (on WinForms) without much trouble. The architecture also lent itself well to building a usable prototype on ASP.NET MVC with the same set of business objects, without changing a single line of code.

    My Dilemma
    I have used Custom Business Objects, since that gives me a clearer OOP representation of the problem scope in my solution scope and helps me visualize my entire solution as a collection of objects with data and behavior, rather than having a set of relational data (DataSet) and implementing behaviors (business logic, validation) etc. separately. With rich databinding support in .NET 2.0, binding Custom Business Objects to the UI was a breeze. Now, while building my business objects, I am still in a dilemma about the representation of collections in business objects. Currently I am using DataSets to represent collections, while I have seen many suggestions to implement custom collections. For example, in my vision, a typical Sale Invoice object will contain 'Sale Invoice Items' as a collection. Now theoretically, I can accept that each 'Sale Invoice Item' should have its own behavior along with its data (ItemCode, Name, Qty, Price etc.), but typically managing the Sale Invoice Items in a Sale Invoice is handled by the Sale Invoice object itself, e.g. adding/removing items from the collection. Additionally, we can also put business logic/rules for the Sale Invoice Items, like "Qty should not be greater than the ordered qty" or "Price should be at most 10% above the price in the Sale Order", in the Sale Invoice object itself.
    With that kind of a vision, I felt that most business-object child collections can be managed by the parent itself, including add/remove from the collection as well as implementing business logic for the collection items; hence the collection items hold nothing but data. Additionally, typical collections are represented in the UI in grids, where the ability to support DataBinding becomes very important for any collection. Implementing a custom collection, in that case, would also mean I have to implement robust DataBinding support for the collection as well, which is of course time consuming. Now, considering that child collection behaviors are implemented in the parent, and given the need for DataBinding of child collections, I chose DataSet to represent any child collection in my business objects. In the above example of the Sale Invoice, I will have 'Invoice Number', 'Date', 'Customer' etc. as attributes of the 'Sale Invoice', but 'InvoiceItems' as a DataSet. Of course, when I say DataSet, it is not a vanilla DataSet but an extended DataSet that supports business rule validation and the same role-based security model of my framework, to allow/deny any business operation on rows/columns of the DataSet automatically. This approach has allowed easier collection management and databinding in my business objects, and my developers are able to deliver modules rapidly.

    Questions
    - Do you feel that the approach is reasonable?
    - Do you see any shortcomings of this approach?
    - I am recently thinking of using 'Typed DataSets' as child collections, for easier representation in code, which will allow me to write 'currentInvoice.InvoiceItems' (for the DataTable) and 'invoiceItem.ProductCode' or 'invoiceItem.Qty', instead of 'drow["ProductCode"].ToString()' or '(int)drow["Qty"]' etc. Does this choice have any demerits?

    Thank you if you have read this far, and a salute if you still have the energy to answer.
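
    On the Typed DataSet question, a minimal sketch of what the generated classes buy; SaleInvoiceDataSet here is a hypothetical typed DataSet generated from an .xsd with an InvoiceItems table, not an existing class:

        // Untyped access: string column names and runtime casts
        DataTable items = untypedInvoice.Tables["InvoiceItems"];
        string code = items.Rows[0]["ProductCode"].ToString();
        int qty = (int)items.Rows[0]["Qty"];

        // Typed access over the same data, checked at compile time
        SaleInvoiceDataSet ds = LoadCurrentInvoice();  // assumed loader
        SaleInvoiceDataSet.InvoiceItemsRow item = ds.InvoiceItems[0];
        string code2 = item.ProductCode;  // a typo here fails to compile, not at runtime
        int qty2 = item.Qty;

    The practical demerits: the classes must be regenerated whenever the schema changes, and layering your custom validation/security behavior on top takes more care, since the generated types are partial classes but still inherit from plain DataSet/DataTable.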


  • New hire expectations... (Am I being unreasonable?)

    - by user295841
    I work for a very small custom software shop. We currently consist of me and my boss. My boss is an old FoxPro DOS developer, and OOP makes him uncomfortable. He is planning on taking a back seat in the next few years to hopefully enjoy a "partial retirement". I will be taking over the day-to-day operations, and we are now desperately looking for more help. We tried Monster.com, Dice.com, and others a few years ago when we started our search. We had no success. We have tried outsourcing overseas (total disaster), hiring kids right out of college (mostly a disaster, but that's where I came from), interns (good for them, not so good for us) and hiring laid-off "experienced" developers (there was a reason they were laid off). I have heard hiring practices discussed on podcasts, blogs, etc. and have tried a few. The "Fizz Buzz" test was a good one. One kid looked physically ill before he finally gave up. I think my problem is that I have grown so much as a developer since I started here that I now have a high standard. I hear and read very intelligent people's podcasts and blogs, and I know that there are lots of people out there who can do the job. I don't want to settle for less than a "good" developer. Perhaps my expectations are unreasonable. I expect any good developer (entry level or experienced) to be billable (at least paying their own wage) in under one month. I expect any good developer to be able to be productive (at least dangerous) in any language or technology with only a few days of research/training. I expect any good developer to be able to take a project from initial customer request to completion with little or no help from others. Am I being unreasonable? What constitutes a valuable developer? What should be expected of an entry level developer? What should be expected of an experienced developer? I realize that everyone is different, but there has to be some sort of expectations standard, right? I have been giving the test project below to potential candidates to weed them out. Good idea? Too much? Too little? Please let me know what you think. Thanks.

    Project ID: T00001
    Description: Order Entry System
    Deadline: 1 Week

    Scope
    The scope of this project is to develop a fully functional order entry system. Screen/form design must be user friendly and promote efficient data entry and modification. User experience (navigation, screen/form layouts, look and feel...) is at the developer's discretion. The system may be developed using any technologies that conform to the technical and system requirements.

    Deliverables
    - Complete source code
    - Database setup instructions (scripts or restorable backup)
    - Application installation instructions (installer or installation procedure)
    - Any necessary documentation

    Technical Requirements
    - Server Platform: Windows XP / Windows Server 2003 / SBS
    - Client Platform: Windows XP
    - Web Browser (if applicable): IE 8
    - Database: at developer's discretion (must be a relational SQL database)
    - Language: at developer's discretion
    - All data must be normalized. (+)
    - All data must maintain referential integrity. (++)
    - All data must be indexed for optimal performance.
    - System must handle concurrency.

    System Requirements

    Customer Maintenance
    - Customer records must have a unique ID.
    - Customer data will include Name, Address, Phone, etc.
    - User must be able to perform all CRUD (Create, Read, Update, and Delete) operations on the Customer table.
    - User must be able to enter a specific Customer ID to edit.
    - User must be able to pull up a sortable/queryable search grid/utility to find a customer to edit.
    - Validation must be performed prior to database commit.
    - Customer record cannot be deleted if the customer has an order in the system. (++)

    Inventory Maintenance
    - Part records must have a unique ID.
    - Part data will include Description, Price, UOM (Unit of Measure), etc.
    - User must be able to perform all CRUD operations on the Part table.
    - User must be able to enter a specific Part ID to edit.
    - User must be able to pull up a sortable/queryable search grid/utility to find a part to edit.
    - Validation must be performed prior to database commit.
    - Part record cannot be deleted if the part has been used in an order. (++)

    Order Entry
    - Order records must have a unique auto-incrementing key (Order Number).
    - Order data must be split into a header/detail structure. (+)
    - Order can contain an infinite number of detail records.
    - Order header data will include Order Number, Customer ID (++), Order Date, Order Status (Open/Closed), etc.
    - Order detail data will include Part Number (++), Quantity, Price, etc.
    - User must be able to perform all CRUD operations on the order tables.
    - User must be able to enter a specific Order Number to edit.
    - User must be able to pull up a sortable/queryable search grid/utility to find an order to edit.
    - User must be able to print an order form from within the order entry form.
    - Validation must be performed prior to database commit.

    Reports
    - Customer Listing: all customers in the system.
    - Inventory Listing: all parts in the system.
    - Open Order Listing: all open orders in the system.
    - Customer Order Listing: all orders for a specific customer.
    - All reports must include sorts and filter functions where applicable, e.g. Customer Listing by range of Customer IDs, Open Order Listing by date range.
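
    For illustration only (not part of the assignment text), a minimal header/detail schema in SQL Server-flavored SQL that would satisfy the (+) normalization and (++) referential-integrity marks above; all names are hypothetical:

        CREATE TABLE Customer (
            CustomerID   INT PRIMARY KEY,
            Name         VARCHAR(80) NOT NULL,
            Address      VARCHAR(200),
            Phone        VARCHAR(20)
        );

        CREATE TABLE Part (
            PartID       INT PRIMARY KEY,
            Description  VARCHAR(120) NOT NULL,
            Price        DECIMAL(10,2) NOT NULL,
            UOM          VARCHAR(10) NOT NULL
        );

        CREATE TABLE OrderHeader (
            OrderNumber  INT IDENTITY PRIMARY KEY,                      -- auto-incrementing key
            CustomerID   INT NOT NULL REFERENCES Customer(CustomerID),  -- (++)
            OrderDate    DATETIME NOT NULL,
            OrderStatus  VARCHAR(6) NOT NULL                            -- 'Open' / 'Closed'
        );

        CREATE TABLE OrderDetail (
            OrderNumber  INT NOT NULL REFERENCES OrderHeader(OrderNumber),
            LineNumber   INT NOT NULL,
            PartID       INT NOT NULL REFERENCES Part(PartID),          -- (++)
            Quantity     INT NOT NULL,
            Price        DECIMAL(10,2) NOT NULL,
            PRIMARY KEY (OrderNumber, LineNumber)                       -- header/detail (+)
        );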


  • Improving SAS multipath to JBOD performance on Linux

    - by user36825
    Hello all, I'm trying to optimize a storage setup on some Sun hardware with Linux. Any thoughts would be greatly appreciated. We have the following hardware:
    - Sun Blade X6270
    - 2x LSISAS1068E SAS controllers
    - 2x Sun J4400 JBODs with 1 TB disks (24 disks per JBOD)
    - Fedora Core 12
    - 2.6.33 release kernel from FC13 (also tried with the latest 2.6.31 kernel from FC12, same results)
    Here's the datasheet for the SAS hardware: http://www.sun.com/storage/storage_networking/hba/sas/PCIe.pdf It's using PCI Express 1.0a, 8x lanes. With a bandwidth of 250 MB/sec per lane, we should be able to do 2000 MB/sec per SAS controller. Each controller can do 3 Gb/sec per port and has two 4-port PHYs. We connect both PHYs from a controller to a JBOD. So between the JBOD and the controller we have 2 PHYs * 4 SAS ports * 3 Gb/sec = 24 Gb/sec of bandwidth, which is more than the PCI Express bandwidth. With write caching enabled and when doing big writes, each disk can sustain about 80 MB/sec (near the start of the disk). With 24 disks, that means we should be able to do 1920 MB/sec per JBOD. Our multipath configuration:

        multipath {
            rr_min_io              100
            uid                    0
            path_grouping_policy   multibus
            failback               manual
            path_selector          "round-robin 0"
            rr_weight              priorities
            alias                  somealias
            no_path_retry          queue
            mode                   0644
            gid                    0
            wwid                   somewwid
        }

    I tried values of 50, 100, and 1000 for rr_min_io, but it doesn't seem to make much difference. Along with varying rr_min_io I tried adding some delay between starting the dd's to prevent all of them writing over the same PHY at the same time, but this didn't make any difference, so I think the I/Os are getting properly spread out. According to /proc/interrupts, the SAS controllers are using an "IR-IO-APIC-fasteoi" interrupt scheme. For some reason only core #0 in the machine is handling these interrupts. I can improve performance slightly by assigning a separate core to handle the interrupts for each SAS controller:

        echo 2 > /proc/irq/24/smp_affinity
        echo 4 > /proc/irq/26/smp_affinity

    Using dd to write to the disk generates "Function call interrupts" (no idea what these are), which are handled by core #4, so I keep other processes off this core too. I run 48 dd's (one for each disk), assigning them to cores not dealing with interrupts, like so:

        taskset -c somecore dd if=/dev/zero of=/dev/mapper/mpathx oflag=direct bs=128M

    oflag=direct prevents any kind of buffer cache from getting involved. None of my cores seem maxed out. The cores dealing with interrupts are mostly idle, and all the other cores are waiting on I/O, as one would expect:

        Cpu0  :  0.0%us,  1.0%sy,  0.0%ni, 91.2%id,  7.5%wa,  0.0%hi,  0.2%si,  0.0%st
        Cpu1  :  0.0%us,  0.8%sy,  0.0%ni, 93.0%id,  0.2%wa,  0.0%hi,  6.0%si,  0.0%st
        Cpu2  :  0.0%us,  0.6%sy,  0.0%ni, 94.4%id,  0.1%wa,  0.0%hi,  4.8%si,  0.0%st
        Cpu3  :  0.0%us,  7.5%sy,  0.0%ni, 36.3%id, 56.1%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu4  :  0.0%us,  1.3%sy,  0.0%ni, 85.7%id,  4.9%wa,  0.0%hi,  8.1%si,  0.0%st
        Cpu5  :  0.1%us,  5.5%sy,  0.0%ni, 36.2%id, 58.3%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu6  :  0.0%us,  5.0%sy,  0.0%ni, 36.3%id, 58.7%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu7  :  0.0%us,  5.1%sy,  0.0%ni, 36.3%id, 58.5%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu8  :  0.1%us,  8.3%sy,  0.0%ni, 27.2%id, 64.4%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu9  :  0.1%us,  7.9%sy,  0.0%ni, 36.2%id, 55.8%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu10 :  0.0%us,  7.8%sy,  0.0%ni, 36.2%id, 56.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu11 :  0.0%us,  7.3%sy,  0.0%ni, 36.3%id, 56.4%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu12 :  0.0%us,  5.6%sy,  0.0%ni, 33.1%id, 61.2%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu13 :  0.1%us,  5.3%sy,  0.0%ni, 36.1%id, 58.5%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu14 :  0.0%us,  4.9%sy,  0.0%ni, 36.4%id, 58.7%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu15 :  0.1%us,  5.4%sy,  0.0%ni, 36.5%id, 58.1%wa,  0.0%hi,  0.0%si,  0.0%st

    Given all this, the throughput reported by running "dstat 10" is in the range of 2200-2300 MB/sec. Given the math above, I would expect something in the range of 2 * 1920 ~= 3600+ MB/sec. Does anybody have any idea where my missing bandwidth went? Thanks!
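
    One hedged thing to rule out before blaming multipath (bus addresses below are placeholders): confirm each HBA actually negotiated the full x8 link, since a slot that trained at x4 would cap that controller near half of the expected 2000 MB/sec.

        # Locate the LSI controllers, then inspect the negotiated PCIe link
        lspci | grep -i lsi
        lspci -vv -s 0a:00.0 | grep -E 'LnkCap|LnkSta'
        # LnkSta "Width x4" against LnkCap "Width x8" would roughly halve
        # that controller's usable bandwidth.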


  • If Nvidia Shield can stream a game via WiFi (~150-300Mbps), where is the 1-10Gbps wired streaming?

    - by Enigma
    Facts: It is surprising and uncharacteristic that a wireless game streaming solution is the *first to hit the market when a 1000mbps+ Ethernet connection would accomplish the same feat with roughly 6x the available bandwidth. 150-300mbps WiFi is in no way superior to a 1000mbps+ LAN connection aside from well wireless mobility. Throughout time, (since the internet was created) wired services have **always come first yet in this particular case, the opposite seems to be true. We had wired internet first, wired audio streaming, and wired video streaming all before their wireless counterparts. Why? Largely because the wireless bandwidth was and is inferior. Even today despite being significantly better and capable of a lot more, it is still inferior to a wired connection. Situation: Chief among these is that NVIDIA’s Shield handheld game console will be getting a microconsole-like mode, dubbed “Shield Console Mode”, that will allow the handheld to be converted into a more traditional TV-connected console. In console mode Shield can be controlled with a Bluetooth controller, and in accordance with the higher resolution of TVs will accept 1080p game streaming from a suitably equipped PC, versus 720p in handheld mode. With that said 1080p streaming will require additional bandwidth, and while 720p can be done over WiFi NVIDIA will be requiring a hardline GigE connection for 1080p streaming (note that Shield doesn’t have Ethernet, so this is presumably being done over USB). Streaming aside, in console mode Shield will also support its traditional local gaming/application functionality. - http://www.anandtech.com/show/7435/nvidia-consolidates-game-streaming-tech-under-gamestream-brand-announces-shield-console-mode ^ This is not acceptable to me for a number of reasons not to mention the ridiculousness of having a little screen+controller unit sitting there while using a secondary controller and screen instead. That kind of redundant absurdity exemplifies how wrong of a solution that is. They need a second product for this solution without the screen or controller for it to make sense... at which point your just buying a little computer that does what most other larger computers do better. While this secondary project will provide a wired connection, it still shouldn't be necessary to purchase a Shield to have this benefit. Not only this but Intel's WiDi claims game streaming support as well - wirelessly. Where is the wired streaming? All that is required, by my understanding, is the ability to decode H.264 video compression and transmit control/feedback so by any logical comparison, one (Nvidia especially) should have no difficulty in creating an application for PC's (win32/64 environment) that does the exact same thing their android app does. I have 2 video cards capable of streaming (encoding) H.264 so by right they must be capable of decoding it I would think. I should be able to stream to my second desktop or my laptop both of which by hardware comparison are superior to the Shield. I haven't found anything stating plans to allow non-shield owners to do this. Can a third party create this software or does it hinge on some limitation that only Nvidia can overcome? 
    Reiteration of questions:
    - Is there a technical reason (not marketing) why Nvidia opted to bottleneck the streaming service with a wireless connection, limiting the resolution to 720p and introducing intermittent video choppiness, when on a wired connection one could achieve, presumably, 1080p with significantly less or zero choppiness?
    - Is there anything limiting developers from creating a PC/desktop application emulating the same H.264 decoding functionality that circumvents the need to get an Nvidia Shield altogether? (It is not a matter of being too cheap to support Nvidia - I have many Nvidia cards that aren't being used. One should not have to purchase specialty hardware when equal or better hardware already exists.)
    - The same questions go for Intel WiDi too. I am just utterly perplexed that there are wireless live streaming solutions and yet no wired ones. How on earth can wireless be the go-to transmission medium?
    - Is there another solution that takes advantage of H.264 video compression allowing live streaming over a wired connection?

    (*) - Perhaps this isn't the first, but afaik it is the first complete package.
    (**) - I can't back that up with hard evidence/links, but someone probably could.

    Edit: Maybe this will be the solution I am looking for, but I still find it hard to believe that they would be the first, and only after wireless solutions already exist:

    In-home Streaming
    You can play all your Windows and Mac games on your SteamOS machine, too. Just turn on your existing computer and run Steam as you always have - then your SteamOS machine can stream those games over your home network straight to your TV! - http://store.steampowered.com/livingroom/SteamOS/


  • COMException when trying to use a Library

    - by sarkie
    Hi Guys, I have an ASP.NET web service which uses a library; this has a dependency on some third-party .dlls. If I add a reference to the library in my web service, I get a COMException and I can't load the site. I thought it might be to do with aspnet user credentials, so I have tried impersonating and using processModel in machine.config, but nothing seems to work. The .dlls are for communicating with hardware, so I am not even using them on the server, just other parts of the library. Is there any way I can fix this? I'm running on Windows XP Pro SP3 with Visual Studio 2008 SP1 and .NET 3.5. I am thinking the only way of fixing it is to split up the library into hardware and non-hardware parts. Cheers, Sarkie

    The specified procedure could not be found. (Exception from HRESULT: 0x8007007F)
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.Runtime.InteropServices.COMException: The specified procedure could not be found. (Exception from HRESULT: 0x8007007F)
    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:
        [COMException (0x8007007f): The specified procedure could not be found. (Exception from HRESULT: 0x8007007F)]
        [FileLoadException: A procedure imported by 'OBIDISC4NETnative, Version=0.0.0.0, Culture=neutral, PublicKeyToken=900ed37a7058e4f2' could not be loaded.]
        System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) +0
        System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) +43
        System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +127
        System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +142
        System.Reflection.Assembly.Load(String assemblyString) +28
        System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +46
        [ConfigurationErrorsException: A procedure imported by 'OBIDISC4NETnative, Version=0.0.0.0, Culture=neutral, PublicKeyToken=900ed37a7058e4f2' could not be loaded.]
        System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +613
        System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory() +203
        System.Web.Configuration.CompilationSection.LoadAssembly(AssemblyInfo ai) +105
        System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig) +178
        System.Web.Compilation.WebDirectoryBatchCompiler..ctor(VirtualDirectory vdir) +163
        System.Web.Compilation.BuildManager.BatchCompileWebDirectoryInternal(VirtualDirectory vdir, Boolean ignoreErrors) +53
        System.Web.Compilation.BuildManager.BatchCompileWebDirectory(VirtualDirectory vdir, VirtualPath virtualDir, Boolean ignoreErrors) +175
        System.Web.Compilation.BuildManager.CompileWebFile(VirtualPath virtualPath) +83
        System.Web.Compilation.BuildManager.GetVPathBuildResultInternal(VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile) +261
        System.Web.Compilation.BuildManager.GetVPathBuildResultWithNoAssert(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile) +101
        System.Web.Compilation.BuildManager.GetVPathBuildResult(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile) +83
        System.Web.Compilation.BuildManager.GetVPathBuildResult(HttpContext context, VirtualPath virtualPath) +10
        System.Web.UI.WebServiceParser.GetCompiledType(String inputFile, HttpContext context) +43
        System.Web.Services.Protocols.WebServiceHandlerFactory.GetHandler(HttpContext context, String verb, String url, String filePath) +180
        System.Web.Script.Services.ScriptHandlerFactory.GetHandler(HttpContext context, String requestType, String url, String pathTranslated) +102
        System.Web.HttpApplication.MapHttpHandler(HttpContext context, String requestType, VirtualPath path, String pathTranslated, Boolean useAppConfig) +193
        System.Web.MapHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +93
        System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155

    Version Information: Microsoft .NET Framework Version:2.0.50727.3082; ASP.NET Version:2.0.50727.3082
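
    HRESULT 0x8007007F is ERROR_PROC_NOT_FOUND: the native half of OBIDISC4NETnative was found, but a function it imports from some dependent native DLL was not, which usually means the ASP.NET worker process resolved a wrong-version DLL through its PATH (Dependency Walker pointed at the bin folder will usually name the culprit). A hedged sketch of pinning the native search path before the library is first touched; the directory is hypothetical:

        using System;
        using System.Runtime.InteropServices;

        public static class NativeLoader
        {
            // Win32: prepend a directory to the native DLL search path
            [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
            static extern bool SetDllDirectory(string lpPathName);

            // Call once, e.g. from Application_Start in Global.asax, before
            // any type from the hardware library is used.
            public static void Init()
            {
                SetDllDirectory(@"C:\MyApp\NativeDlls");  // hypothetical location
            }
        }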


  • Problem receiving in RXTX

    - by drhorrible
    I've been using RXTX for about a year now, without too many problems. I just started a new program to interact with a new piece of hardware, so I reused the connect() method I've used in my other projects, but I have a weird problem I've never seen before.

    The Problem
    The device works fine, because when I connect with HyperTerminal, I send things and receive what I expect, and Serial Port Monitor (SPM) reflects this. However, when I run the simple HyperTerminal clone I wrote to diagnose the problem I'm having with my main app, bytes are sent according to SPM, but nothing is received, and my SerialPortEventListener never fires. Even when I check for available data in the main loop, reader.ready() returns false. If I ignore this check, I get an exception, details below.

    Relevant section of connect() method:

        // Configure and open port
        port = (SerialPort) CommPortIdentifier.getPortIdentifier(name).open(owner, 1000);
        port.setSerialPortParams(baud, databits, stopbits, parity);
        port.setFlowControlMode(fc_mode);
        final BufferedReader br = new BufferedReader(
                new InputStreamReader(port.getInputStream(), "US-ASCII"));
        // Add listener to print received characters to screen
        port.addEventListener(new SerialPortEventListener() {
            public void serialEvent(SerialPortEvent ev) {
                try {
                    System.out.println("Received: " + br.readLine());
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        });
        port.notifyOnDataAvailable(true);

    Exception:

        java.io.IOException: Underlying input stream returned zero bytes
            at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:268)
            at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306)
            at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158)
            at java.io.InputStreamReader.read(InputStreamReader.java:167)
            at java.io.BufferedReader.fill(BufferedReader.java:136)
            at java.io.BufferedReader.read(BufferedReader.java:157)
            at <my code>

    The big question (again)
    I think I've eliminated all possible hardware problems, so what could be wrong with my code, or the RXTX library?

    Edit: something interesting
    When I open HyperTerminal after sending a bunch of commands from Java that should have gotten responses, all of the responses appear immediately, as if they had been put in a buffer somewhere, but unavailable.

    Edit 2: Tried something new, same results
    I ran the code example found here, with the same results. No data came in, but when I switched to a new program, it came all at once.

    Edit 3
    The hardware is fine, and even a different computer has the same problem. I am not using any sort of USB adapter. I've started using PortMon too, and it's giving me some interesting results. HyperTerminal and RXTX are not using the same settings, and RXTX always polls the port, unlike HyperTerminal, but I still can't see what settings would affect this. As soon as I can isolate the configuration from the constant polling, I'll post my PortMon logs.

    Edit 4
    Is it possible that some sort of Windows update in the last 3 months could have caused this? It has screwed up one of my MATLAB mex-based programs once.

    Edit 5
    I've also noticed some things that are different between HyperTerminal, RXTX, and a separate program I found that communicates with the device (but doesn't do what I want, which is why I'm rolling my own program):
    - HyperTerminal: set to no flow control, but Serial Port Monitor's RTS and DTR indicators are green
    - Other program: not sure what settings it thinks it's using, but only SPM's RTS indicator is green
    - RXTX: no matter what flow control I set, only SPM's CTS and DTR indicators are on
    From Serial Port Monitor's help files (paraphrased): the indicators display the state of the serial control lines (RTS - Request To Send, CTS - Clear To Send, DTR - Data Terminal Ready).
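
    For comparison, a common pattern that sidesteps the readLine()/StreamDecoder exception above is to read raw bytes only when DATA_AVAILABLE fires and never block inside the callback. A sketch against the same port object (gnu.io and java.io imports assumed; buffer handling is illustrative):

        final InputStream in = port.getInputStream();
        try {
            port.addEventListener(new SerialPortEventListener() {
                public void serialEvent(SerialPortEvent ev) {
                    if (ev.getEventType() != SerialPortEvent.DATA_AVAILABLE) return;
                    try {
                        byte[] buf = new byte[1024];
                        // Read only what the driver reports; do not block here
                        while (in.available() > 0) {
                            int n = in.read(buf, 0, Math.min(in.available(), buf.length));
                            if (n > 0) System.out.print(new String(buf, 0, n, "US-ASCII"));
                        }
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            });
        } catch (TooManyListenersException e) {
            e.printStackTrace();
        }
        port.notifyOnDataAvailable(true);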


  • Oracle performance problem

    - by jreid42
    We are using an Oracle 11g machine that is very powerful and has redundant storage, etc. It's a beast, from what I have been told. We just got this DB for a tool that had around 20 users when I first came on as a co-op; now it's upwards of 150 people, and I am the only one working on it :( We currently have a system in place that distributes Perl scripts across our entire data center, essentially giving us a sort of "grid" computing power. The Perl scripts run a sort of simulation and report the results back to the database. They do selects and inserts. The load is not very high for each script, but it could be happening across 20-50 systems at the same time. We then have multiple data centers and users all hitting the same database with this same approach. Our main problem is that our database is getting overloaded with connections and having to drop some. We sometimes have upwards of 500 connections. These are old Perl scripts and they do not handle this well. Essentially they fail and the results are lost. I would rather avoid having to rewrite a lot of these, as they are poorly written and a headache to even look at. The database itself is not overloaded; just the connection overhead is too high. We open a connection, make a quick query, and then drop the connection: very short connections, but many of them. The database team has basically said we need to lower the number of connections or they are going to ignore us. Because this is distributed across our farm, we can't implement persistent connections. I do this with our web server, but it's on a fixed system. The others are Perl scripts that get opened and closed by the distribution tool and thus aren't always running. What would be my best approach to resolving this issue? The scripts themselves can wait for a connection to open; they do not need to act immediately. Some sort of queuing system? It's been suggested that I set up a few instances of a tool called "SQL Relay", maybe one in each data center. How reliable is this tool? How good is this approach? Would it work for what we need? We could have one for each data center and relay requests through it to our main database, keeping a pipeline of open persistent connections. Does this make sense? Are there any other suggestions you can make? Any ideas? Any help would be greatly appreciated. Sadly I am just a co-op student working for a very big company, and somehow all of this has landed on my shoulders (there is literally nobody to ask for help; it's a hardware company, everybody is a hardware engineer, and the database team is useless and in India), and I am quite lost as to what the best approach would be. I am extremely overworked, and this problem is interfering with ongoing progress and basically needs to be resolved as quickly as possible; preferably without rewriting the whole system, purchasing hardware (not gonna happen), or shooting myself in the foot. HELP LOL!
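
    Whatever pooling or relay layer ends up in front of the database, a cheap first step is to make the scripts wait instead of fail: a bounded retry around the connect turns "connection refused, results lost" into a short delay. A sketch (DSN and credentials are placeholders):

        use strict;
        use warnings;
        use DBI;

        sub connect_with_retry {
            my ($tries, $delay) = (10, 5);
            for my $i (1 .. $tries) {
                my $dbh = DBI->connect('dbi:Oracle:host=dbhost;sid=ORCL',
                                       'user', 'pass', { PrintError => 0 });
                return $dbh if $dbh;
                warn "connect failed ($i/$tries): $DBI::errstr; retrying in ${delay}s\n";
                sleep $delay;
            }
            die "could not connect after $tries attempts\n";
        }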


  • How to make MySQL utilize available system resources, or find "the real problem"?

    - by anonymous coward
    This is a MySQL 5.0.26 server, running on SuSE Enterprise 10. (This may be a Server Fault question.) The web user interface that uses these particular queries (below) sometimes takes 30+, even up to 120+ seconds at the worst, to generate the pages involved. On development, when the queries are run alone, they take up to 20 seconds on the first run (with no query cache enabled) but anywhere from 2 to 7 seconds after that - I assume because the tables and indexes involved have been placed into RAM. From what I can tell, the longest load times are caused by read/update locking. These are MyISAM tables. So it looks like a long update comes in, followed by a couple of 7-second queries, and they're just adding up. And I'm fine with that explanation. What I'm not fine with is that MySQL doesn't appear to be utilizing the hardware it's on, and while the bottleneck seems to be the database, I can't understand why. I would say "throw more hardware at it", but we did, and it doesn't appear to have changed the situation. Viewing a 'top' during the slowest times never shows much CPU or memory utilization by mysqld, as if the server is having no trouble at all - but then, why are the queries taking so long? How can I make MySQL use the crap out of this hardware, or find out what I'm doing wrong?

    Extra details: On the "Memory Health" tab in the MySQL Administrator (for Windows), the Key Buffer is less than 1/8th used, so all the indexes should be in RAM. I can provide a screen shot of any graphs that might help. So desperate to fix this issue. Suffice it to say, there is legacy code "generating" these queries, and they're pretty much stuck the way they are. I have tried every combination of indexes on the tables involved, but any suggestions are welcome. Here's the current CREATE TABLE statement from development (the 'experimental' key I have added seems to help a little, for the example query only):

        CREATE TABLE `registration_task` (
          `id` varchar(36) NOT NULL default '',
          `date_entered` datetime NOT NULL default '0000-00-00 00:00:00',
          `date_modified` datetime NOT NULL default '0000-00-00 00:00:00',
          `assigned_user_id` varchar(36) default NULL,
          `modified_user_id` varchar(36) default NULL,
          `created_by` varchar(36) default NULL,
          `name` varchar(80) NOT NULL default '',
          `status` varchar(255) default NULL,
          `date_due` date default NULL,
          `time_due` time default NULL,
          `date_start` date default NULL,
          `time_start` time default NULL,
          `parent_id` varchar(36) NOT NULL default '',
          `priority` varchar(255) NOT NULL default '9',
          `description` text,
          `order_number` int(11) default '1',
          `task_number` int(11) default NULL,
          `depends_on_id` varchar(36) default NULL,
          `milestone_flag` varchar(255) default NULL,
          `estimated_effort` int(11) default NULL,
          `actual_effort` int(11) default NULL,
          `utilization` int(11) default '100',
          `percent_complete` int(11) default '0',
          `deleted` tinyint(1) NOT NULL default '0',
          `wf_task_id` varchar(36) default '0',
          `reg_field` varchar(8) default '',
          `date_offset` int(11) default '0',
          `date_source` varchar(10) default '',
          `date_completed` date default '0000-00-00',
          `completed_id` varchar(36) default NULL,
          `original_name` varchar(80) default NULL,
          PRIMARY KEY (`id`),
          KEY `idx_reg_task_p` (`deleted`,`parent_id`),
          KEY `By_Assignee` (`assigned_user_id`,`deleted`),
          KEY `status_assignee` (`status`,`deleted`),
          KEY `experimental` (`deleted`,`status`,`assigned_user_id`,`parent_id`,`date_due`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

    And one of the ridiculous queries in question:

        SELECT users.user_name assigned_user_name,
               registration.FIELD001 parent_name,
               registration_task.status status,
               registration_task.date_modified date_modified,
               registration_task.date_due date_due,
               registration.FIELD240 assigned_wf,
               if(LENGTH(registration_task.description)>0,1,0) has_description,
               registration_task.*
        FROM registration_task
        LEFT JOIN users ON registration_task.assigned_user_id=users.id
        LEFT JOIN registration ON registration_task.parent_id=registration.id
        WHERE (registration_task.status != 'Completed'
           AND registration.FIELD001 LIKE '%'
           AND registration_task.name LIKE '%'
           AND registration.FIELD060 LIKE 'GN001472%')
           AND registration_task.deleted=0
        ORDER BY date_due asc
        LIMIT 0,20;

    my.cnf - '[mysqld]' section:

        [mysqld]
        port = 3306
        socket = /var/lib/mysql/mysql.sock
        skip-locking
        key_buffer = 384M
        max_allowed_packet = 100M
        table_cache = 2048
        sort_buffer_size = 2M
        net_buffer_length = 100M
        read_buffer_size = 2M
        read_rnd_buffer_size = 160M
        myisam_sort_buffer_size = 128M
        query_cache_size = 16M
        query_cache_limit = 1M

    EXPLAIN for the above query, without the additional index:

        +----+-------------+-------------------+--------+--------------------------------+----------------+---------+------------------------------------------------+---------+-----------------------------+
        | id | select_type | table             | type   | possible_keys                  | key            | key_len | ref                                            | rows    | Extra                       |
        +----+-------------+-------------------+--------+--------------------------------+----------------+---------+------------------------------------------------+---------+-----------------------------+
        |  1 | SIMPLE      | registration_task | ref    | idx_reg_task_p,status_assignee | idx_reg_task_p | 1       | const                                          | 1067354 | Using where; Using filesort |
        |  1 | SIMPLE      | registration      | eq_ref | PRIMARY,gbl                    | PRIMARY        | 8       | sugarcrm401.registration_task.parent_id        | 1       | Using where                 |
        |  1 | SIMPLE      | users             | ref    | PRIMARY                        | PRIMARY        | 38      | sugarcrm401.registration_task.assigned_user_id | 1       |                             |
        +----+-------------+-------------------+--------+--------------------------------+----------------+---------+------------------------------------------------+---------+-----------------------------+

    EXPLAIN for the above query, with the 'experimental' index:

        +----+-------------+-------------------+--------+-----------------------------------------------------------+------------------+---------+------------------------------------------------+--------+-----------------------------+
        | id | select_type | table             | type   | possible_keys                                             | key              | key_len | ref                                            | rows   | Extra                       |
        +----+-------------+-------------------+--------+-----------------------------------------------------------+------------------+---------+------------------------------------------------+--------+-----------------------------+
        |  1 | SIMPLE      | registration_task | range  | idx_reg_task_p,status_assignee,NewIndex1,tcg_experimental | tcg_experimental | 259     | NULL                                           | 103345 | Using where; Using filesort |
        |  1 | SIMPLE      | registration      | eq_ref | PRIMARY,gbl                                               | PRIMARY          | 8       | sugarcrm401.registration_task.parent_id        | 1      | Using where                 |
        |  1 | SIMPLE      | users             | ref    | PRIMARY                                                   | PRIMARY          | 38      | sugarcrm401.registration_task.assigned_user_id | 1      |                             |
        +----+-------------+-------------------+--------+-----------------------------------------------------------+------------------+---------+------------------------------------------------+--------+-----------------------------+
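
    A hedged experiment (index names hypothetical; verify with EXPLAIN before and after): the WHERE clause filters on registration columns, so the LEFT JOIN to registration is effectively an INNER JOIN, and the only selective predicate is the FIELD060 prefix. Letting the optimizer drive from registration avoids scanning the ~1M task rows that feed the filesort:

        -- Support the selective predicate and the join back to tasks
        ALTER TABLE registration ADD INDEX idx_field060 (FIELD060);
        ALTER TABLE registration_task ADD INDEX idx_parent_deleted (parent_id, deleted);

        -- Same result set with the join direction made explicit
        SELECT ...
        FROM registration
        INNER JOIN registration_task ON registration_task.parent_id = registration.id
        LEFT JOIN users ON registration_task.assigned_user_id = users.id
        WHERE registration.FIELD060 LIKE 'GN001472%'
          AND registration_task.status != 'Completed'
          AND registration_task.deleted = 0
        ORDER BY registration_task.date_due ASC
        LIMIT 0, 20;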

    Read the article

  • How To Configure Remote Desktop To Hyper-V Guest Virtual Machines

    - by Brian Jackett
    Configuring Remote Desktop (RDP) from a host Hyper-V machine to a guest virtual machine can be tricky, so this post is dedicated to the issues and resolution steps I went through to allow RDP.  Cutting to the point, below are the things to look for, followed by some explanation about my scenario if you care to read.  This is not an exhaustive list of what is required, just the items that were causing problems for my particular scenario.

    Requirements
    - Allow Remote Desktop connections in the guest OS.
    - The network adapter type must allow communication with the host machine (e.g. use an "Internal" virtual adapter).
    - If running Server 2008 R2 on the guest, network discovery mode must be turned on.
    - If running Server 2008 R2 on the guest, the services supporting network discovery mode must be running:
      - DNS Client
      - Function Discovery Resource Publication
      - SSDP Discovery
      - UPnP Device Host

    My Environment
    A quick word about my environment.  I am running Windows Server 2008 R2 with Hyper-V on my laptop and numerous guest VMs running Windows Server 2003 R2 or Windows Server 2008 R2.  I run a domain controller VM and then 1 or 2 SharePoint servers depending on my work needs.  I've found this setup to work well except when it comes to the display window for my VMs.

    The Issue
    Ever since I began running Hyper-V I haven't been able to RDP to my guest VMs, which means the resolution of my connection windows has been limited to what the native Hyper-V connections allow.  During personal use I can put the resolution up to 1152 x 864, but during presentations I am usually limited to a measly 800 x 600.  That is, until today, when I decided to fully investigate why I couldn't connect via RDP.

    First, a thank you to John Ross (@johnrossjr), Christina Wheeler (@cwheeler76) and Clayton Cobb (@warrtalon) for various suggestions while I was researching tonight.  As it turns out I had not 1, not 2, but 3 items preventing me from using RDP.  Let's dig into the requirements above.

    Allow RDP Connection
    This item I had previously taken care of, but it bears repeating because by default Windows Server 2008 R2 does not allow RDP connections.  Change the setting from "Don't allow..." to whichever "Allow connections..." setting suits your needs.  I chose the less secure option, as this is just my dev laptop.

    Network Adapter Type
    When I originally configured my VMs I gave each one 2 network adapters: one bound to the physical Ethernet adapter for internet use, and a virtual private adapter for communication between the VMs.  The Ethernet connection is an "External" adapter and thus doesn't connect between the host and guest, while the virtual "Private" adapter allows communication ONLY between the VMs, not with the host.  The third option, "Internal", allows communication between the VMs as well as with the host.  After finding out this distinction I promptly created an Internal network adapter and assigned it to my VMs.

    Turn On Network Discovery
    Seems like a pretty common-sense thing, but in order to allow remote desktop connections the target computer must be discoverable by the source computer (explained here.)  One of the settings that controls whether a computer can be found on the network is aptly named Network Discovery.  By default Windows Server 2008 R2 turns Network Discovery off for security purposes.  To enable it, open the Network and Sharing Center and click "Change Advanced Sharing Settings" on the left.  On the following screen select "Turn on network discovery" for the currently used profile and click Save Settings.  You may notice, though, that your selection to turn on network discovery doesn't save.  If this is the case then you most likely don't have the supporting services running (as was my case.)

    Network Discovery Supporting Services
    There are a total of 4 services (listed again below) that need to be running before you can turn on network discovery (explained here.)  In my guest VM I found that DNS Client was already running while the other 3 were disabled.  I set them all to enabled and started the ones that were stopped; a command-line sketch of the same changes follows after this post's links.  After this change I returned to the sharing settings screen and found that Network Discovery was turned on.  I'm not sure whether this picked up my earlier attempt to turn it on or whether starting those services turned it on.  Either way, the end result was a success.
    - DNS Client
    - Function Discovery Resource Publication
    - SSDP Discovery
    - UPnP Device Host

    Before and After Results
    The first image is the smaller, square-shaped viewing window used by the native Hyper-V connection.  The second is the full-screen RDP connection in all its widescreen glory.

    Conclusion
    Over the past few months I've found Hyper-V to be very useful for virtualizing my development environments, but I've also had a steep learning curve to get various items configured just right.  Allowing RDP connections to guest VMs was one area that I hadn't been able to get right for the longest time.  Now that I've resolved these issues I hope that others can avoid the pitfalls that I ran into.  If you know of any other items I left off, feel free to let me know.

        -Frog Out

    Links
    Turning on Network Discovery
    http://sqlblog.com/blogs/john_paul_cook/archive/2009/08/15/remote-desktop-connection-on-windows-server-2008-r2.aspx
    Services required for Network Discovery
    http://social.technet.microsoft.com/Forums/en-US/winservergen/thread/2e1fea01-3f2b-4c46-a631-a8db34ed4f84
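    As referenced above, here is a sketch of the same service changes run from an elevated command prompt on the guest.  The short service names (Dnscache, FDResPub, SSDPSRV, upnphost) are the standard ones for these services on Server 2008 R2, but that mapping is an assumption about your build - verify with "sc query" before relying on it.

rem Set the four network discovery services to start automatically.
sc config Dnscache start= auto
sc config FDResPub start= auto
sc config SSDPSRV start= auto
sc config upnphost start= auto
rem Start the ones that are stopped (DNS Client is usually already running).
net start FDResPub
net start SSDPSRV
net start upnphost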

    Read the article

  • Opening Skype, Opera, OpenOffice logs me off

    - by anjanesh
    What's common among Skype, Opera, and OpenOffice in Ubuntu? Whenever I open any of these applications I get logged out and shown the login screen again. This started happening since the 10.10 upgrade. Forgot to mention: yes, it's x64. Each time I open one of these applications, the UI shows and then it crashes. I started each app and logged the last few lines of /var/log/syslog after each crash. Looks like something to do with the sound drivers?

    Opera:

Jan 8 09:33:20 al-ubuntu pulseaudio[11532]: pid.c: Daemon already running.
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: snd_pcm_avail_delay() returned strange values: delay 0 is less than avail 8.
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Most likely this is a bug in the ALSA driver 'snd_hda_intel'. Please report this issue to the ALSA developers.
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: snd_pcm_dump():
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Soft volume PCM
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Control: PCM Playback Volume
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: min_dB: -51
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: max_dB: 0
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: resolution: 256
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Its setup is:
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: stream : CAPTURE
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: access : MMAP_INTERLEAVED
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: format : S16_LE
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: subformat : STD
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: channels : 2
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: rate : 44100
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: exact rate : 44100 (44100/1)
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: msbits : 16
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: buffer_size : 88192
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_size : 44096
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_time : 999909
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: tstamp_mode : ENABLE
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_step : 1
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: avail_min : 87310
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_event : 0
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: start_threshold : -1
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: stop_threshold : 6205960286516543488
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: silence_threshold: 0
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: silence_size : 0
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: boundary : 6205960286516543488
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Slave: Hardware PCM card 0 'HDA Intel' device 0 subdevice 0
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Its setup is:
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: stream : CAPTURE
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: access : MMAP_INTERLEAVED
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: format : S16_LE
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: subformat : STD
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: channels : 2
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: rate : 44100
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: exact rate : 44100 (44100/1)
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: msbits : 16
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: buffer_size : 88192
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_size : 44096
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_time : 999909
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: tstamp_mode : ENABLE
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_step : 1
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: avail_min : 87310
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_event : 0
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: start_threshold : -1
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: stop_threshold : 6205960286516543488
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: silence_threshold: 0
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: silence_size : 0
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: boundary : 6205960286516543488
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: appl_ptr : 87320
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: hw_ptr : 87320
Jan 8 09:33:22 al-ubuntu kernel: [ 4962.078306] opera[11036]: segfault at 261 ip 0000000000000261 sp 00007fffed7cd9a8 error 14 in opera[400000+122b000]
anjanesh@al-ubuntu:~$

    Skype:

Jan 8 09:40:21 al-ubuntu pulseaudio[12602]: pid.c: Daemon already running.
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: snd_pcm_avail_delay() returned strange values: delay 0 is less than avail 8.
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Most likely this is a bug in the ALSA driver 'snd_hda_intel'. Please report this issue to the ALSA developers.
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: snd_pcm_dump():
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Soft volume PCM
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Control: PCM Playback Volume
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: min_dB: -51
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: max_dB: 0
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: resolution: 256
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Its setup is:
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: stream : CAPTURE
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: access : MMAP_INTERLEAVED
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: format : S16_LE
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: subformat : STD
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: channels : 2
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: rate : 44100
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: exact rate : 44100 (44100/1)
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: msbits : 16
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: buffer_size : 88192
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_size : 44096
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_time : 999909
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: tstamp_mode : ENABLE
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_step : 1
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: avail_min : 87310
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_event : 0
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: start_threshold : -1
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: stop_threshold : 6205960286516543488
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: silence_threshold: 0
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: silence_size : 0
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: boundary : 6205960286516543488
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Slave: Hardware PCM card 0 'HDA Intel' device 0 subdevice 0
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Its setup is:
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: stream : CAPTURE
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: access : MMAP_INTERLEAVED
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: format : S16_LE
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: subformat : STD
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: channels : 2
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: rate : 44100
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: exact rate : 44100 (44100/1)
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: msbits : 16
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: buffer_size : 88192
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_size : 44096
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_time : 999909
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: tstamp_mode : ENABLE
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_step : 1
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: avail_min : 87310
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_event : 0
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: start_threshold : -1
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: stop_threshold : 6205960286516543488
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: silence_threshold: 0
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: silence_size : 0
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: boundary : 6205960286516543488
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: appl_ptr : 87312
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: hw_ptr : 87312
anjanesh@al-ubuntu:~$

    OpenOffice:

Jan 8 09:43:46 al-ubuntu pulseaudio[13157]: pid.c: Daemon already running.
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: snd_pcm_avail_delay() returned strange values: delay 0 is less than avail 16.
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Most likely this is a bug in the ALSA driver 'snd_hda_intel'. Please report this issue to the ALSA developers.
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: snd_pcm_dump():
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Soft volume PCM
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Control: PCM Playback Volume
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: min_dB: -51
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: max_dB: 0
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: resolution: 256
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Its setup is:
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: stream : CAPTURE
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: access : MMAP_INTERLEAVED
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: format : S16_LE
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: subformat : STD
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: channels : 2
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: rate : 44100
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: exact rate : 44100 (44100/1)
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: msbits : 16
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: buffer_size : 88192
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_size : 44096
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_time : 999909
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: tstamp_mode : ENABLE
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_step : 1
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: avail_min : 87310
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_event : 0
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: start_threshold : -1
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: stop_threshold : 6205960286516543488
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: silence_threshold: 0
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: silence_size : 0
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: boundary : 6205960286516543488
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Slave: Hardware PCM card 0 'HDA Intel' device 0 subdevice 0
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Its setup is:
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: stream : CAPTURE
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: access : MMAP_INTERLEAVED
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: format : S16_LE
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: subformat : STD
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: channels : 2
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: rate : 44100
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: exact rate : 44100 (44100/1)
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: msbits : 16
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: buffer_size : 88192
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_size : 44096
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_time : 999909
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: tstamp_mode : ENABLE
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_step : 1
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: avail_min : 87310
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_event : 0
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: start_threshold : -1
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: stop_threshold : 6205960286516543488
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: silence_threshold: 0
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: silence_size : 0
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: boundary : 6205960286516543488
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: appl_ptr : 87320
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: hw_ptr : 87320
anjanesh@al-ubuntu:~$
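    A note beyond the original question: since all three logs end in the same snd_hda_intel/PulseAudio warnings, one quick way to test whether the sound stack is the trigger (an assumption, not a confirmed diagnosis) is to take PulseAudio out of the picture and launch an affected app from a terminal:

# Stop the per-session PulseAudio daemon so the app falls back to plain ALSA.
# PulseAudio may respawn automatically; setting "autospawn = no" in
# ~/.pulse/client.conf first makes the test stick.
pulseaudio --kill
# Watch syslog in the background for the snd_hda_intel warnings.
tail -f /var/log/syslog &
# Launch one of the affected apps and capture any crash output in the terminal.
opera

    If the applications survive without PulseAudio, the bug report belongs with the ALSA/PulseAudio stack (as the log itself suggests) rather than with the individual applications.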

    Read the article

  • MEF CompositionInitializer for WPF

    - by Reed
    The Managed Extensibility Framework is an amazingly useful addition to the .NET Framework.  I was very excited to see System.ComponentModel.Composition added to the core framework.  Personally, I feel that MEF is one tool I've always been missing in my .NET development. Unfortunately, one scenario where MEF tends to fall short of its full potential is Windows Presentation Foundation development.  In particular, there are many times when the XAML parser constructs objects in WPF development, which makes composition of those parts difficult.  The current release of MEF (Preview Release 9) addresses this for Silverlight developers via System.ComponentModel.Composition.CompositionInitializer.  However, there is no equivalent class for WPF developers.

    The CompositionInitializer class provides the means for an object to compose itself.  This is very useful in WPF and Silverlight development, since it allows a View, such as a UserControl, to be generated via the standard XAML parser and still automatically pull in the appropriate ViewModel in an extensible manner.  Glenn Block has demonstrated the usage for Silverlight in detail, but the same issues apply in WPF. As an example, let's take a look at a very simple case.  Take the following XAML for a Window:

<Window x:Class="WpfApplication1.MainView"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="220" Width="300">
    <Grid>
        <TextBlock Text="{Binding TheText}" />
    </Grid>
</Window>

    This does nothing but create a Window, add a simple TextBlock control, and use it to display the value of the "TheText" property on our DataContext class.  Since this is our main window, WPF will automatically construct and display the Window, so we need to handle constructing the DataContext and setting it ourselves. We could do this in code or in XAML, but to do it directly we would need to hard-code the ViewModel type into our XAML, or construct the ViewModel class and set it in the code-behind.  Both have disadvantages, and the disadvantages grow if we're using MEF to compose our ViewModel. Ideally, we'd like MEF to construct our ViewModel for us.  This way, it can provide any construction requirements for our ViewModel via [ImportingConstructor], and it can handle fully composing the imported properties on our ViewModel.  CompositionInitializer allows this to occur.

    We use CompositionInitializer within our View's constructor for self-composition of the View.  Using CompositionInitializer, we can modify our code-behind to:

public partial class MainView : Window
{
    public MainView()
    {
        InitializeComponent();
        CompositionInitializer.SatisfyImports(this);
    }

    [Import("MainViewModel")]
    public object ViewModel
    {
        get { return this.DataContext; }
        set { this.DataContext = value; }
    }
}

    We then add an Export to our ViewModel class like so:

[Export("MainViewModel")]
public class MainViewModel
{
    public string TheText
    {
        get { return "Hello World!"; }
    }
}

    MEF will automatically compose our application, deferring the injection of the ViewModel into the View's DataContext until runtime.  When we run this, the window displays "Hello World!" from the composed ViewModel.

    There are many other approaches for using MEF to wire up the extensible parts within your application, of course.  However, any time an object is going to be constructed by code outside of your control, CompositionInitializer allows us to continue to use MEF to satisfy the import requirements of that object.

    In order to use this from WPF, I've ported the code from MEF Preview 9 and Glenn Block's (now obsolete) PartInitializer port to Windows Presentation Foundation.  There are some subtle changes from the Silverlight port, mainly to handle running in a desktop application context.  The default behavior of my port is to construct an AggregateCatalog containing a DirectoryCatalog set to the location of the entry assembly of the application.  In addition, if an "Extensions" folder exists under the entry assembly's directory, a second DirectoryCatalog for that folder will be included.  This behavior can be overridden by specifying a CompositionContainer or one or more ComposablePartCatalogs to the System.ComponentModel.Composition.Hosting.CompositionHost static class prior to the first use of CompositionInitializer.

    Please download CompositionInitializer and CompositionHost for VS 2010 RC, and contact me with any feedback: Composition.Initialization.Desktop.zip

    Edit on 3/29: Glenn Block has since updated his version of CompositionInitializer (and ExportFactory<T>!) and made it available here: http://cid-f8b2fd72406fb218.skydrive.live.com/self.aspx/blog/Composition.Initialization.Desktop.zip  This is a .NET 3.5 solution, and should soon be pushed to CodePlex and made available on the main MEF site.
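    To make the override hook above concrete, here is a minimal sketch of pre-configuring CompositionHost before any view composes itself.  It assumes the WPF port keeps Initialize overloads like the Silverlight version's, and the "Plugins" folder name is purely illustrative:

using System.ComponentModel.Composition.Hosting;
using System.Reflection;

public static class CompositionBootstrapper
{
    // Call once at startup (e.g. from App.OnStartup), before the first
    // CompositionInitializer.SatisfyImports call runs.
    public static void Configure()
    {
        var catalog = new AggregateCatalog(
            new AssemblyCatalog(Assembly.GetEntryAssembly()),
            new DirectoryCatalog(@".\Plugins"));  // hypothetical extension folder

        CompositionHost.Initialize(new CompositionContainer(catalog));
    }
}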

    Read the article

  • Using Microsoft Office 2007 with E-Business Suite Release 12

    - by Steven Chan
    Many products in the Oracle E-Business Suite offer optional integrations with Microsoft Office and Microsoft Project.  For example, some EBS products can export tabular reports to Microsoft Excel.  Some EBS products integrate directly with Microsoft products, and others work through the Applications Desktop Integrator (WebADI and ADI) as an intermediary.

    These EBS integrations have historically been documented in their respective product-specific documentation.  In other words, if an EBS product in the Oracle Financials family supported an integration with, say, Microsoft Excel, it was up to the product team to document that in the Oracle Financials documentation.  Some EBS systems administrators have found hunting through the various product-specific documents for Office-related information to be a bit difficult.  In response to your Service Requests and emails, we've released a new document that consolidates and summarises all patching and configuration requirements for EBS products with MS Office integration points in a single place: Using Microsoft Office 2007 with Oracle E-Business Suite 11i and R12 (Note 1072807.1)

    Read the article
