Search Results

Search found 66801 results on 2673 pages for 'open type fonts'.


  • Ubuntu 12.04 patched b43 driver compilation error

    - by Zed
    I tried this "How do I install this patched b43 driver?" guide to install the patched b43 driver on Ubuntu 12.04 with the 3.2.0-31-generic kernel, but I can't get past the compilation phase. Here is what I did:

    wget http://www.orbit-lab.org/kernel/compat-wireless-3-stable/v3.1/compat-wireless-3.1.1-1.tar.bz2
    cd compat-wireless-3.1.1-1/
    scripts/driver-select b43
    make

    make -C /lib/modules/3.2.0-31-generic/build M=/home/marco/compat-wireless-3.1.1-1 modules
    make[1]: Entering directory `/usr/src/linux-headers-3.2.0-31-generic'
    CC [M] /home/marco/compat-wireless-3.1.1-1/compat/main.o
    In file included from /home/marco/compat-wireless-3.1.1-1/include/linux/compat-2.6.29.h:5:0,
    from /home/marco/compat-wireless-3.1.1-1/include/linux/compat-2.6.h:24,
    from <command-line>:0:
    include/linux/netdevice.h:1153:5: warning: "IS_ENABLED" is not defined [-Wundef]
    include/linux/netdevice.h:1153:15: error: missing binary operator before token "("
    include/linux/netdevice.h: In function ‘netdev_uses_dsa_tags’:
    include/linux/netdevice.h:1421:9: error: ‘struct net_device’ has no member named ‘dsa_ptr’
    include/linux/netdevice.h:1422:31: error: ‘struct net_device’ has no member named ‘dsa_ptr’
    include/linux/netdevice.h: In function ‘netdev_uses_trailer_tags’:
    include/linux/netdevice.h:1431:9: error: ‘struct net_device’ has no member named ‘dsa_ptr’
    include/linux/netdevice.h:1432:35: error: ‘struct net_device’ has no member named ‘dsa_ptr’
    make[3]: *** [/home/marco/compat-wireless-3.1.1-1/compat/main.o] Error 1
    make[2]: *** [/home/marco/compat-wireless-3.1.1-1/compat] Error 2
    make[1]: *** [_module_/home/marco/compat-wireless-3.1.1-1] Error 2
    make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-31-generic'
    make: *** [modules] Error 2

    To fix that error I added #include <linux/kconfig.h> to /usr/src/linux-headers-3.2.0-31-generic/include/linux/netdevice.h, but now I'm getting something else:

    make
    make -C /lib/modules/3.2.0-31-generic/build M=/home/marco/compat-wireless-3.1.1-1 modules
    make[1]: Entering directory `/usr/src/linux-headers-3.2.0-31-generic'
    CC [M] /home/marco/compat-wireless-3.1.1-1/compat/main.o
    LD [M] /home/marco/compat-wireless-3.1.1-1/compat/compat.o
    CC [M] /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.o
    In file included from /home/marco/compat-wireless-3.1.1-1/include/linux/bcma/bcma.h:9:0,
    from /home/marco/compat-wireless-3.1.1-1/drivers/bcma/bcma_private.h:8,
    from /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:8:
    /home/marco/compat-wireless-3.1.1-1/include/linux/ssb/ssb.h: In function ‘ssb_driver_register’:
    /home/marco/compat-wireless-3.1.1-1/include/linux/ssb/ssb.h:236:36: error: ‘THIS_MODULE’ undeclared (first use in this function)
    /home/marco/compat-wireless-3.1.1-1/include/linux/ssb/ssb.h:236:36: note: each undeclared identifier is reported only once for each function it appears in
    In file included from /home/marco/compat-wireless-3.1.1-1/drivers/bcma/bcma_private.h:8:0,
    from /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:8:
    /home/marco/compat-wireless-3.1.1-1/include/linux/bcma/bcma.h: In function ‘bcma_driver_register’:
    /home/marco/compat-wireless-3.1.1-1/include/linux/bcma/bcma.h:170:37: error: ‘THIS_MODULE’ undeclared (first use in this function)
    /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c: At top level:
    /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:12:20: error: expected declaration specifiers or ‘...’ before string constant
    /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:13:16: error: expected declaration specifiers or ‘...’ before string constant
    /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:182:1: warning: data definition has no type or storage class [enabled by default]
    /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:182:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL_GPL’ [-Wimplicit-int]
    /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:182:1: warning: parameter names (without types) in function declaration [enabled by default]
    /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:188:1: warning: data definition has no type or storage class [enabled by default]
    /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:188:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL_GPL’ [-Wimplicit-int]
    /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:188:1: warning: parameter names (without types) in function declaration [enabled by default]
    make[3]: *** [/home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.o] Error 1
    make[2]: *** [/home/marco/compat-wireless-3.1.1-1/drivers/bcma] Error 2
    make[1]: *** [_module_/home/marco/compat-wireless-3.1.1-1] Error 2
    make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-31-generic'
    make: *** [modules] Error 2

    Any suggestions on what to try next?

    Read the article

  • UPK 3.6.1 (is Coming)

    - by marc.santosusso
    In anticipation of the release of UPK 3.6.1, I'd like to briefly describe some of the features that will be available in this new version.

    Topic Editor in Tabs
    Topic Editors now open in tabs instead of separate Developer windows. This offers several improvements: first, the bubble editor can be docked and resized in the same way as other editor panes. That's right, you can resize the bubble editor! The second enhancement this change brings is improved undo and redo, which allows each action to be undone and redone in the Topic Editor.

    New Sound Editor
    The topic and web page editors include a new sound editor with all the bells and whistles necessary to record, edit, import, and export sound. Sound can be captured during topic recording--which is great for a Subject Matter Expert (SME) to narrate what they're recording--or after the topic has been recorded. Sound can also be added to web pages and played on the concept panes of modules, sections, and topics.

    Turn off bubbles in Topics
    Authors may opt to hide bubbles either per frame or for an entire topic, for when you want to draw a user's attention to the content on the screen instead of the bubble. This feature works extremely well in conjunction with the new sound capabilities. For instance, consider recording conceptual information with narration and no bubbles.

    Presentation Output
    UPK content can be published as a Presentation in Microsoft PowerPoint format. Publishing for Presentation will create a presentation for each topic published. The presentation template can be customized using the same methods offered for the UPK document outputs, allowing your UPK-generated presentations to match your corporate branding.

    Autosave and Recovery
    The Developer will automatically save your work as often as you would like. This affords authors the ability to recover these automatically saved documents if their system or UPK were to close unexpectedly. The Developer defaults to saving open documents every ten minutes.

    Package Editor Enhancement
    Files in packages will now open in the associated application when double-clicked. Authors can also choose "Open with..." from the context menu (a.k.a. the right-click menu).

    See It! Window
    See It! mode may now be launched in a non-fullscreen window. This is available from the kp.html file in any Player package. This version of See It! mode offers on-screen navigation controls including previous frame, next frame, pause, etc.

    Firefox Enhancements
    The UPK Player will now offer both Do It! mode and sound playback when viewed using the Firefox web browser.

    Player Support for Safari
    The UPK Player is now fully supported on the Safari web browser for both Mac OS and Windows platforms.

    Keep document checked out
    Authors may choose to keep a document checked out when performing a check in. This allows an author to have a new version created on the server and continue editing.

    Close button on individual tabs
    A close button has been added to the tabs, making it easier to close a specific tab.

    Outline Editor Enhancements
    Authors will have the option to prevent concepts from immediately displaying in the Developer when an outline item is selected. This makes it faster to move around in the outline editor.

    Tell us which feature you're most excited to use in the comments.

    Read the article

  • How to Create Views for All Tables with Oracle SQL Developer

    - by thatjeffsmith
    Got this question over the weekend via a friend and Oracle ACE Director, so I thought I would share the answer here. If you want to quickly generate DDL to create VIEWs for all the tables in your system, the easiest way to do that with SQL Developer is to create a data model. Wait, why would I want to do this? StackOverflow has a few things to say on this subject… So, start with importing a data dictionary.

    Step One: Open or Create a Model
    In SQL Developer, go to View – Data Modeler – Browser. Then in the browser panel, expand your design and create a new Relational Model.

    Step Two: Import your Data Dictionary
    This is a fancy way of saying, ‘suck objects out of the database into my model’. This will open a wizard to connect, select your schema(s), objects, etc. Once they’re in your model, you’re ready to cook with gas. I’m using HR (Human Resources) for this example. You should end up with something that looks like this: our favorite HR model. Now we’re ready to generate the views!

    Step Three: Auto-generate the Views
    Go to Tools – Data Modeler – Table to View Wizard. I don’t want all my tables included, and I want to change the naming standard. Decide if you want to change the default generated view names: by default the views will be created as ‘V_TABLE_NAME.’ If you don’t like the ‘V_’ you can enter your own prefix. You can also reference the object and model name with variables as shown in the screenshot above. I’m going to go with something a little more personal. The views are the little green boxes in the diagram. Can’t find your views? They should be grouped together in your diagram. Don’t forget to use the Navigator to easily find and navigate to those model diagram objects!

    Step Four: Generate the DDL
    OK, let’s use the Generate DDL button on the toolbar. Un-check everything but your views. If you used a prefix, take advantage of that to create a filter. You might have existing views in your model that you don’t want to include, right? Once you click ‘OK’ the DDL will be generated.
    -- Generated by Oracle SQL Developer Data Modeler 4.0.0.825
    -- at: 2013-11-04 10:26:39 EST
    -- site: Oracle Database 11g
    -- type: Oracle Database 11g

    CREATE OR REPLACE VIEW HR.TJS_BLOG_COUNTRIES ( COUNTRY_ID, COUNTRY_NAME, REGION_ID ) AS
    SELECT COUNTRY_ID, COUNTRY_NAME, REGION_ID FROM HR.COUNTRIES;

    CREATE OR REPLACE VIEW HR.TJS_BLOG_EMPLOYEES ( EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID ) AS
    SELECT EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID FROM HR.EMPLOYEES;

    CREATE OR REPLACE VIEW HR.TJS_BLOG_JOBS ( JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY ) AS
    SELECT JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY FROM HR.JOBS;

    CREATE OR REPLACE VIEW HR.TJS_BLOG_JOB_HISTORY ( EMPLOYEE_ID, START_DATE, END_DATE, JOB_ID, DEPARTMENT_ID ) AS
    SELECT EMPLOYEE_ID, START_DATE, END_DATE, JOB_ID, DEPARTMENT_ID FROM HR.JOB_HISTORY;

    CREATE OR REPLACE VIEW HR.TJS_BLOG_LOCATIONS ( LOCATION_ID, STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID ) AS
    SELECT LOCATION_ID, STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID FROM HR.LOCATIONS;

    CREATE OR REPLACE VIEW HR.TJS_BLOG_REGIONS ( REGION_ID, REGION_NAME ) AS
    SELECT REGION_ID, REGION_NAME FROM HR.REGIONS;

    -- Oracle SQL Developer Data Modeler Summary Report:
    --
    -- CREATE TABLE 0
    -- CREATE INDEX 0
    -- ALTER TABLE 0
    -- CREATE VIEW 6
    -- CREATE PACKAGE 0
    -- CREATE PACKAGE BODY 0
    -- CREATE PROCEDURE 0
    -- CREATE FUNCTION 0
    -- CREATE TRIGGER 0
    -- ALTER TRIGGER 0
    -- CREATE COLLECTION TYPE 0
    -- CREATE STRUCTURED TYPE 0
    -- CREATE STRUCTURED TYPE BODY 0
    -- CREATE CLUSTER 0
    -- CREATE CONTEXT 0
    -- CREATE DATABASE 0
    -- CREATE DIMENSION 0
    -- CREATE DIRECTORY 0
    -- CREATE DISK GROUP 0
    -- CREATE ROLE 0
    -- CREATE ROLLBACK SEGMENT 0
    -- CREATE SEQUENCE 0
    -- CREATE MATERIALIZED VIEW 0
    -- CREATE SYNONYM 0
    -- CREATE TABLESPACE 0
    -- CREATE USER 0
    --
    -- DROP TABLESPACE 0
    -- DROP DATABASE 0
    --
    -- REDACTION POLICY 0
    --
    -- ERRORS 0
    -- WARNINGS 0

    You can then choose to save this to a file or not. This has a few steps, but as the number of tables in your system increases, so does the amount of time this feature can save you!

    Read the article

  • Error Handling Examples (C#)

    “The purpose of reviewing the Error Handling code is to assure that the application fails safely under all possible error conditions, expected and unexpected. No sensitive information is presented to the user when an error occurs.” (OWASP, 2011)

    No Error Handling
    The absence of error handling is still a form of error handling. Based on the code in Figure 1, if an error occurred and was not handled within either the ReadXml or BuildRequest methods, the error would bubble up to the Search method. Since this method does not handle any exceptions, the error will then bubble up the stack trace. If this continues and the error is not handled within the application, then the environment in which the application is running will notify the user that an error occurred, in a manner that depends on the type of application.

    Figure 1: No Error Handling

    public DataSet Search(string searchTerm, int resultCount)
    {
        DataSet dt = new DataSet();
        dt.ReadXml(BuildRequest(searchTerm, resultCount));
        return dt;
    }

    Generic Error Handling
    One simple way to add error handling is to catch all errors by default. If you examine the code in Figure 2, you will see a try-catch block. On April 6th, 2010, Louis Lazaris clearly described a try-catch statement by defining both the try and catch aspects of the statement. “The try portion is where you would put any code that might throw an error. In other words, all significant code should go in the try section. The catch section will also hold code, but that section is not vital to the running of the application. So, if you removed the try-catch statement altogether, the section of code inside the try part would still be the same, but all the code inside the catch would be removed.” (Lazaris, 2010) He also states that any error that occurs in the try section stops the execution of the try section and redirects all execution to the catch section. The catch section receives an object containing information about the error that occurred so that the system can handle the error gracefully. When errors occur, applications commonly log them in some form. This could be an email, a database entry, a web service call, a log file, or just an error message displayed to the user. Depending on the error, sometimes applications can recover, while other errors force an application to close.

    Figure 2: Generic Error Handling

    public DataSet Search(string searchTerm, int resultCount)
    {
        DataSet dt = new DataSet();
        try
        {
            dt.ReadXml(BuildRequest(searchTerm, resultCount));
        }
        catch (Exception ex)
        {
            // Handle all Exceptions
        }
        return dt;
    }

    Error Specific Error Handling
    Like Generic Error Handling, Error Specific error handling allows for the catching of specific known errors that may occur. For example, wrapping a try-catch statement around a SOAP web service call would allow the application to handle any error that was generated by the SOAP web service. Now, suppose the system wanted to send a message to the web service provider every time a SOAP error occurred, but did not want to notify them of any other type of error, such as a network timeout. This would be very tedious to accomplish using the Generic Error Handling methodology, which brings us to the use case for the Error Specific methodology. Error Specific error handling allows the try-catch statement to catch various types of errors and handle each differently, depending on the type of error that occurred. In Figure 3, the code attempts to handle TimeoutException differently compared to how it potentially handles all other errors. This allows for specific error handling for each type of known error, while still allowing for error handling of any unknown error that may occur during the execution of the try-catch statement.

    Figure 3: Error Specific Error Handling

    public DataSet Search(string searchTerm, int resultCount)
    {
        DataSet dt = new DataSet();
        try
        {
            dt.ReadXml(BuildRequest(searchTerm, resultCount));
        }
        catch (TimeoutException ex)
        {
            // Handle TimeoutException only
        }
        catch (Exception)
        {
            // Handle all other Exceptions
        }
        return dt;
    }
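    As a concrete sketch of the logging idea mentioned above (an illustration, not part of the original article; the file-based log and its path are assumptions), the TimeoutException handler from Figure 3 could record full details for developers while the caller simply gets an empty result set:

        public DataSet Search(string searchTerm, int resultCount)
        {
            DataSet dt = new DataSet();
            try
            {
                dt.ReadXml(BuildRequest(searchTerm, resultCount));
            }
            catch (TimeoutException ex)
            {
                // Log the full exception for developers; the path is illustrative only.
                System.IO.File.AppendAllText(@"C:\logs\search.log",
                    DateTime.UtcNow + " " + ex + Environment.NewLine);
                // Fail safely: the caller receives an empty DataSet,
                // not a stack trace or other sensitive details.
            }
            return dt;
        }

    Logging server-side while returning a safe default keeps the OWASP goal quoted at the top intact: the application fails safely and no sensitive information reaches the user.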

    Read the article

  • SQL SERVER – Data Pages in Buffer Pool – Data Stored in Memory Cache

    - by pinaldave
    Have you ever wondered what types of data are in your cache? During SQL Server trainings, I am usually asked if there is any way one can know how much data in a table is stored in the memory cache. The more detailed question I usually get is: if there are multiple indexes on a table (and used in a query), is the data of the single table stored multiple times in the memory cache, or only a single time? Here is a query you can run to figure out what kind of data is stored in the cache.

    USE AdventureWorks
    GO
    SELECT COUNT(*) AS cached_pages_count,
        name AS BaseTableName, IndexName, IndexTypeDesc
    FROM sys.dm_os_buffer_descriptors AS bd
    INNER JOIN (
        SELECT s_obj.name, s_obj.index_id, s_obj.allocation_unit_id, s_obj.OBJECT_ID,
            i.name IndexName, i.type_desc IndexTypeDesc
        FROM (
            SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
            FROM sys.allocation_units AS au
            INNER JOIN sys.partitions AS p ON au.container_id = p.hobt_id AND (au.type = 1 OR au.type = 3)
            UNION ALL
            SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
            FROM sys.allocation_units AS au
            INNER JOIN sys.partitions AS p ON au.container_id = p.partition_id AND au.type = 2
        ) AS s_obj
        LEFT JOIN sys.indexes i ON i.index_id = s_obj.index_id AND i.OBJECT_ID = s_obj.OBJECT_ID
    ) AS obj ON bd.allocation_unit_id = obj.allocation_unit_id
    WHERE database_id = DB_ID()
    GROUP BY name, index_id, IndexName, IndexTypeDesc
    ORDER BY cached_pages_count DESC;
    GO

    Now let us run the query above and observe the output. The output has four columns. Cached_Pages_Count lists the pages cached in memory. BaseTableName lists the original base table from which the data pages are cached. IndexName lists the name of the index from which pages are cached. IndexTypeDesc lists the type of index.

    Now, let us do one more experiment here. Please note that you should not run this test on a production server, as it can severely degrade the performance of the database.

    DBCC DROPCLEANBUFFERS

    This will drop all the clean buffers, so we will be able to start again from there. Now run the following script and check the execution plan of the query.

    USE AdventureWorks
    GO
    SELECT UnitPrice, ModifiedDate
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderDetailID BETWEEN 1 AND 100
    GO

    The execution plan contains the usage of two different indexes. Now, let us run the script that checks the pages cached in SQL Server. It will give us the following output. It is clear from the resultset that when more than one index is used, the data pages related to both or all of the indexes are stored in the memory cache separately. Let me know what you think of this article. I had great pleasure writing it because I was able to write on the subject I like the most. In the next article, we will see exactly what data are cached and what are not, using a few undocumented commands.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: DMV, Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL DMV

    Read the article

  • Desktop Fun: Valentine’s Day 2011 Wallpaper Collection [Bonus Edition]

    - by Asian Angel
    First, we brought you fonts for your Valentine’s Day stationery needs, followed by icon packs to help customize your desktop. Today we finish our romantic holiday trio out with a larger than normal size wallpaper collection.

    Read the article

  • GMail suspects confirmation email in stealing personal information

    - by Dennis Gorelik
    When a user registers on my web site, the web site sends the user an email confirmation link.

    Subject: Please confirm your email address
    Body: Please open this link in your browser to confirm your email address: http://www.postjobfree.com/a/c301718062444f96ba0e358ea833c9b3 This link will expire on: 6/9/2012 8:04:07 PM EST.

    If my web site sends that email to Gmail (either @gmail.com or another domain that's handled by Google Apps) and that user has never emailed us before, then Gmail not only puts the email in the spam folder, but also adds a prominent red warning:

    Be careful with this message. Similar messages were used to steal people's personal information. Unless you trust the sender, don't click links or reply with personal information. Learn more

    That warning really scares many of my users, so they are afraid to open that link and confirm their email. What can I do about it? Ideally I would like that message to end up in the user's inbox, not the spam folder. But at the very least, how do I prevent that scary warning? The IP address of my mailing server is not blacklisted: http://www.mxtoolbox.com/SuperTool.aspx?action=blacklist%3a208.43.198.72 I use SPF and DKIM signatures. Below is the email that ended up in the spam folder with that scary red message.

    Delivered-To: [email protected]
    Received: by 10.112.84.98 with SMTP id x2csp36568lby; Fri, 8 Jun 2012 17:04:15 -0700 (PDT)
    Received: by 10.60.25.6 with SMTP id y6mr9110318oef.42.1339200255375; Fri, 08 Jun 2012 17:04:15 -0700 (PDT)
    Return-Path:
    Received: from smtp.postjobfree.com (smtp.postjobfree.com. [208.43.198.72]) by mx.google.com with ESMTP id v8si6058193oev.44.2012.06.08.17.04.14; Fri, 08 Jun 2012 17:04:15 -0700 (PDT)
    Received-SPF: pass (google.com: domain of [email protected] designates 208.43.198.72 as permitted sender) client-ip=208.43.198.72;
    Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 208.43.198.72 as permitted sender) [email protected]; dkim=pass [email protected]
    DomainKey-Signature: a=rsa-sha1; c=nofws; q=dns; d=postjobfree.com; s=postjobfree.com; h= received:message-id:mime-version:from:to:date:subject:content-type; b=TCip/3hP1WWViWB1cdAzMFPjyi/aUKXQbuSTVpEO7qr8x3WdMFhJCqZciA69S0HB4 Koatk2cQQ3fOilr4ledCgZYemLSJgwa/ZRhObnqgPHAglkBy8/RAwkrwaE0GjLKup 0XI6G2wPlh+ReR+inkMwhCPHFInmvrh4evlBx/VlA=
    DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=postjobfree.com; s=postjobfree.com; h=content-type:subject:date:to:from:mime-version:message-id; bh=N59EIgRECIlAnd41LY4HY/OFI+v1p7t5M9yP+3FsKXY=; b=J3/BdZmpjzP4I6GA4ntmi4REu5PpOcmyzEL+6i7y7LaTR8tuc2h7fdW4HaMPlB7za Lj4NJPed61ErumO66eG4urd1UfyaRDtszWeuIbcIUqzwYpnMZ8ytaj8DPcWPE3JYj oKhcYyiVbgiFjLujib3/2k2PqDIrNutRH9Ln7puz4=
    Received: from sv3035 (sv3035 [208.43.198.72]) by smtp.postjobfree.com with SMTP; Fri, 8 Jun 2012 20:04:07 -0400
    Message-ID:
    MIME-Version: 1.0
    From: "PostJobFree Notification"
    To: [email protected]
    Date: 8 Jun 2012 20:04:07 -0400
    Subject: Please confirm your email address
    Content-Type: multipart/alternative; boundary=--boundary_107_ffa6a9ea-01dc-40f5-a50c-4c3b3d113f08

    ----boundary_107_ffa6a9ea-01dc-40f5-a50c-4c3b3d113f08
    Content-Type: text/plain; charset=us-ascii
    Content-Transfer-Encoding: quoted-printable

    Please open this link in your browser to confirm your email addre= ss: =0D=0Ahttp://www.postjobfree.com/a/c301718062444f96ba0e358ea8= 33c9b3 =0D=0AThis link will expire on: 6/9/2012 8:04:07 PM EST.
=0D=0A ----boundary_107_ffa6a9ea-01dc-40f5-a50c-4c3b3d113f08 Content-Type: text/html; charset=utf-8 Content-Transfer-Encoding: base64 PGh0bWw+PGhlYWQ+PG1ldGEgaHR0cC1lcXVpdj1Db250ZW50LVR5cGUgY29udGVu dD0idGV4dC9odG1sOyBjaGFyc2V0PXV0Zi04Ij48L2hlYWQ+DQo8Ym9keT48ZGl2 Pg0KUGxlYXNlIG9wZW4gdGhpcyBsaW5rIGluIHlvdXIgYnJvd3NlciB0byBjb25m aXJtIHlvdXIgZW1haWwgYWRkcmVzczo8YnIgLz48YSBocmVmPSJodHRwOi8vd3d3 LnBvc3Rqb2JmcmVlLmNvbS9hL2MzMDE3MTgwNjI0NDRmOTZiYTBlMzU4ZWE4MzNj OWIzIj5odHRwOi8vd3d3LnBvc3Rqb2JmcmVlLmNvbS9hL2MzMDE3MTgwNjI0NDRm OTZiYTBlMzU4ZWE4MzNjOWIzPC9hPjxiciAvPlRoaXMgbGluayB3aWxsIGV4cGly ZSBvbjogNi85LzIwMTIgODowNDowNyBQTSBFU1QuPGJyIC8+DQo8L2Rpdj48L2Jv ZHk+PC9odG1sPg== ----boundary_107_ffa6a9ea-01dc-40f5-a50c-4c3b3d113f08--
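    For reference, a multipart/alternative message shaped like the one above can be assembled in .NET roughly as follows. This is a minimal sketch, not the poster's actual sending code; the From and recipient addresses are placeholders (the real ones are redacted above), and only the SMTP host from the headers is reused. SPF and DKIM are handled by DNS and the outgoing mail server, not by this code.

        using System.Net.Mail;
        using System.Net.Mime;

        class ConfirmationMail
        {
            static void Main()
            {
                string link = "http://www.postjobfree.com/a/c301718062444f96ba0e358ea833c9b3";

                // Placeholder addresses; the real ones are redacted in the message above.
                var msg = new MailMessage("notification@postjobfree.com", "recipient@example.com")
                {
                    Subject = "Please confirm your email address",
                    // The plain-text Body becomes the text/plain alternative part.
                    Body = "Please open this link in your browser to confirm your email address:\r\n"
                           + link + "\r\nThis link will expire on: 6/9/2012 8:04:07 PM EST."
                };

                // Adding an HTML alternate view is what yields
                // Content-Type: multipart/alternative, as seen in the headers above.
                string html = "<html><body><div>Please open this link in your browser to confirm"
                              + " your email address:<br /><a href=\"" + link + "\">" + link + "</a>"
                              + "<br />This link will expire on: 6/9/2012 8:04:07 PM EST.</div></body></html>";
                msg.AlternateViews.Add(
                    AlternateView.CreateAlternateViewFromString(html, null, MediaTypeNames.Text.Html));

                new SmtpClient("smtp.postjobfree.com").Send(msg);
            }
        }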

    Read the article

  • Java Spotlight Episode 57: Live From #Devoxx - Ben Evans and Martijn Verburg of the London JUG with Yara Senger of SouJava

    - by Roger Brinkley
    Tweet Live from Devoxx 11: an interview with Ben Evans and Martijn Verburg from the London JUG, along with Yara Senger from the SouJava JUG, on the JCP Executive Committee Elections, JSR 348, and the Adopt-a-JSR program. Both the London JUG and SouJava JUG are JCP Standard Edition Executive Committee Members. Joining us this week on the Java All Star Developer Panel are Geertjan Wielenga, Principal Product Manager in Oracle Developer Tools; Stephen Chin, Java Champion and JavaFX expert; and Antonio Goncalves, Paris JUG leader. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    News
    Netbeans 7.1 JDK 7 upgrade tools
    Netbeans First Patch Program
    OpenJFX approved as an OpenJDK project
    Devoxx France April 18-20, 2012

    Events
    Nov 22-25, OTN Developer Days in the Nordics
    Nov 22-23, Goto Conference, Prague
    Dec 6-8, JavaOne Brazil, Sao Paulo

    Feature interview
    Ben Evans has lived in "Interesting Times" in technology - he was the lead performance testing engineer for the Google IPO, worked on the initial UK trials of 3G networks with BT, built award-winning websites for some of Hollywood's biggest hits of the 90s, rearchitected and reimagined technology helping some of the most vulnerable people in the UK, and has worked on everything from some of the UK's very first ecommerce sites through to multi-billion dollar currency trading systems. He helps to run the London Java Community, and represents the JUG on the Java SE/EE Executive Committee. His first book, "The Well-Grounded Java Developer" (with Martijn Verburg), has just been published by Manning.

    Martijn Verburg (aka 'the Diabolical Developer') herds cats in the Java/open source communities and is constantly humbled by the creative power to be found there. Currently he resides in London, where he co-leads the London JUG (a JCP EC member), runs a couple of open source projects & drinks too much beer at his local pub. You can find him online moderating at the JavaRanch or discussing (ranting?) subjects on the Programmers Stack Exchange site. Most recently he's become a regular speaker at conferences on Java, open source, and software development, and has recently wrapped up his first Manning title, "The Well-Grounded Java Developer", with his co-author Ben Evans.

    Yara Senger is partner and director of teacher education at Globalcode. A graduate of the University of Sao Paulo, Sao Carlos, she has significant experience in Brazil and abroad in developing critical Java solutions. She is the co-creator of the Java Academy and Web Developer Academy programs, accumulating over 1000 hours in the classroom teaching Java. She currently serves as the President of SouJava.

    In this interview Ben, Martijn, and Yara talk about the JCP Executive Committee Elections, JSR 348, and the Adopt-a-JSR program.

    Mail Bag

    What's Cool

    Show Transcripts
    Transcript for this show is available here when available.

    Read the article

  • SharePoint OCR image files indexing

    Introduction
    This article describes how to set up indexing of image files (including TIFF, PDF, JPEG, BMP...) using OCR technology. The indexing described below utilizes Microsoft IFilter technology and as such is not specific to SharePoint; it can be used with any product that uses Microsoft indexing: Microsoft Search, Desktop Search, SQL Server search, and, through plug-ins, Google Desktop Search. I, however, use it with Microsoft Windows SharePoint Services 2003. For those other products, the registration may need to be slightly different.

    Background
    One of the projects I was working on required storage of old documents scanned into PDF files. There was a separate team of people responsible for providing tags for a search engine so those image documents could be found. The whole process was clumsy, labor intensive, and error prone. That was what started me on my exploration path.

    OCR
    The first search I fired was for Open Source OCR products. Pretty quickly, I narrowed it down to TESSERACT (http://code.google.com/p/tesseract-ocr/). Tesseract is an orphaned brainchild of HP, which worked on it from 1985 to 1995. Then it was moved to Open Source, and now, if I understand it correctly, Google is working on it. With credentials like that, it's no wonder that Tesseract scores one of the highest marks on OCR recognition and accuracy. After downloading and struggling just a bit, I got Tesseract to work. The struggling part was that the home page claims its base input format is a TIFF file. Maybe my TIFFs were bad, but I was able to get it to work only for BMP files.

    Image files conversion
    So now that I have an OCR engine that can convert BMP files into text, how do I get text out of the image PDF files? One more search, and I settled on ImageMagick (http://www.imagemagick.org/). This is another wonderful Open Source utility that can convert any file into an image. It worked out of the box, converting any TIFF files into bitmaps, but to get PDF files converted, it requires GhostScript (http://mirror.cs.wisc.edu/pub/mirrors/ghost/GPL/gs864/gs864w32.exe).

    Dealing with text PDFs
    With that utility installed, I was cooking - I can convert any file (in particular PDF and TIFF) into a bitmap, and then I can extract the text out of the bitmap. The only consideration was to somehow treat PDF files containing text differently - after all, OCR is very computation intensive and somewhat error prone even with perfect image quality and resolution. So another quick search, and I have PDFTOTEXT (ftp://ftp.foolabs.com/pub/xpdf/xpdf-3.02pl4-win32.zip) - thank God for Open Source! With these guys, I can pull text out of a PDF in an eye blink. However, I would get nothing for pure image PDFs, but I already have a solution for that!

    Batch process
    It took another 15 minutes to set up a batch script to automate the process (a sketch of the same logic in code follows below):

    Check the file extension.
    If the file is a PDF file:
        try to extract text out of it;
        if there is more than a certain amount of text in the file - done!
        if there is no text, convert the first page into a bitmap and run OCR on the bitmap.
    For any other file type, convert the file into a bitmap and run OCR on the bitmap.

    Once you unzip the attached project, check out the bin\OCR.BAT file. It will create a temporary file in the directory where your source file is, with the same name + the '.txt' extension.
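    To make the batch flow concrete, here is a rough C# port of the same logic. This is an illustrative sketch only - the article ships the logic as bin\OCR.BAT; the 100-character threshold and the assumption that pdftotext, ImageMagick's convert, and tesseract are all on the PATH are mine:

        using System;
        using System.Diagnostics;
        using System.IO;

        class OcrPipeline
        {
            // Runs a command-line tool and waits for it to finish.
            static void Run(string exe, string args)
            {
                var psi = new ProcessStartInfo(exe, args)
                {
                    UseShellExecute = false,
                    CreateNoWindow = true
                };
                using (var p = Process.Start(psi))
                {
                    p.WaitForExit();
                }
            }

            static void Main(string[] argv)
            {
                string file = argv[0];
                string txt = file + ".txt";

                if (Path.GetExtension(file).Equals(".pdf", StringComparison.OrdinalIgnoreCase))
                {
                    // Text PDFs: try to pull the text out directly.
                    Run("pdftotext", "\"" + file + "\" \"" + txt + "\"");
                    if (File.Exists(txt) && new FileInfo(txt).Length > 100)
                        return; // more than a certain amount of text - done!
                }

                // Image PDFs and all other image types: rasterize the first page, then OCR.
                // ImageMagick's convert needs GhostScript installed to read PDFs.
                string bmp = file + ".bmp";
                Run("convert", "-density 300 \"" + file + "[0]\" \"" + bmp + "\"");

                // tesseract takes an output base name and appends .txt itself, producing
                // the source file name + ".txt", as described in the article.
                Run("tesseract", "\"" + bmp + "\" \"" + file + "\"");
            }
        }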

    Read the article

  • mono 3.0.2 + xsp + lighttpd delivers empty page

    - by Nefal Warnets
    I needed MVC 4 (and basic .NET 4.5) support, so I downloaded mono 3.0.2 and deployed it on a lighttpd 1.4.28 installation, together with xsp-2.10.2 (the latest I could find). After going through the config tutorials I managed to get the fastcgi server to spawn, but all pages are served empty. Even if I go to nonexistent URLs or direct .aspx files, I get an empty HTTP 200 response. The log file on Debug shows nothing suspicious. Here is the log:

    [2012-12-12 15:15:38Z] Debug Accepting an incoming connection.
    [2012-12-12 15:15:38Z] Debug Record received. (Type: BeginRequest, ID: 1, Length: 8)
    [2012-12-12 15:15:38Z] Debug Record received. (Type: Params, ID: 1, Length: 801)
    [2012-12-12 15:15:38Z] Debug Record received. (Type: Params, ID: 1, Length: 0)
    [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_SOFTWARE = lighttpd/1.4.28)
    [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_NAME = xxxx)
    [2012-12-12 15:15:38Z] Debug Read parameter. (GATEWAY_INTERFACE = CGI/1.1)
    [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_PORT = 80)
    [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_ADDR = xxxx)
    [2012-12-12 15:15:38Z] Debug Read parameter. (REMOTE_PORT = xxx)
    [2012-12-12 15:15:38Z] Debug Read parameter. (REMOTE_ADDR = xxxx)
    [2012-12-12 15:15:38Z] Debug Read parameter. (SCRIPT_NAME = /ViewPage1.aspx)
    [2012-12-12 15:15:38Z] Debug Read parameter. (PATH_INFO = )
    [2012-12-12 15:15:38Z] Debug Read parameter. (SCRIPT_FILENAME = /data/htdocs/ViewPage1.aspx)
    [2012-12-12 15:15:38Z] Debug Read parameter. (DOCUMENT_ROOT = /data/htdocs)
    [2012-12-12 15:15:38Z] Debug Read parameter. (REQUEST_URI = /ViewPage1.aspx)
    [2012-12-12 15:15:38Z] Debug Read parameter. (QUERY_STRING = )
    [2012-12-12 15:15:38Z] Debug Read parameter. (REQUEST_METHOD = GET)
    [2012-12-12 15:15:38Z] Debug Read parameter. (REDIRECT_STATUS = 200)
    [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_PROTOCOL = HTTP/1.1)
    [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_HOST = xxxxx)
    [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_CONNECTION = keep-alive)
    [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_CACHE_CONTROL = max-age=0)
    [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_USER_AGENT = Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11)
    [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT = text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8)
    [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT_ENCODING = gzip,deflate,sdch)
    [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT_LANGUAGE = en-US,en;q=0.8)
    [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT_CHARSET = ISO-8859-1,utf-8;q=0.7,*;q=0.3)
    [2012-12-12 15:15:38Z] Debug Record received. (Type: StandardInput, ID: 1, Length: 0)
    [2012-12-12 15:15:38Z] Debug Record sent. (Type: EndRequest, ID: 1, Length: 8)

    lighttpd config:

    server.modules += ( "mod_fastcgi" )
    include "conf.d/mono.conf"
    $HTTP["host"] !~ "^vdn\."
    {
      $HTTP["url"] !~ "\.(jpg|gif|png|js|css|swf|ico|jpeg|mp4|flv|zip|7z|rar|psd|pdf|html|htm)$" {
        fastcgi.server += ( "" => ((
          "socket" => mono_shared_dir + "fastcgi-mono-server",
          "bin-path" => mono_fastcgi_server,
          "bin-environment" => (
            "PATH" => mono_dir + "bin:/bin:/usr/bin:",
            "LD_LIBRARY_PATH" => mono_dir + "lib:",
            "MONO_SHARED_DIR" => mono_shared_dir,
            "MONO_FCGI_LOGLEVELS" => "Debug",
            "MONO_FCGI_LOGFILE" => mono_shared_dir + "fastcgi.log",
            "MONO_FCGI_ROOT" => mono_fcgi_root,
            "MONO_FCGI_APPLICATIONS" => mono_fcgi_applications
          ),
          "max-procs" => 1,
          "check-local" => "disable"
        )) )
      }
    }

    The referenced mono.conf:

    index-file.names += ( "index.aspx", "default.aspx" )
    var.mono_dir = "/usr/"
    var.mono_shared_dir = "/tmp/"
    var.mono_fastcgi_server = mono_dir + "bin/" + "fastcgi-mono-server4"
    var.mono_fcgi_root = server.document-root
    var.mono_fcgi_applications = "/:."

    The document root for this server is /data/htdocs. The asp.net files reside there. The lighttpd error logs show nothing. Any help is greatly appreciated!

    Read the article

  • Hello World Pagelet

    - by astemkov
    Introduction
    The goal of this exercise is to give you a basic feel for how you can use Pagelet Producer to proxy a web page. We will proxy a simple static Hello World web page, cut one section out of that page, and present it as a pagelet that you can later insert on your own application page or on a portal page such as a WebCenter Portal space or a WebCenter Interaction community page.

    Hello World sample app
    This is the static web page we will work with. Let's assume the following:
    The Hello World web page is running on server http://appserver.company.com:1234/
    The Hello World web page path is: http://appserver.company.com:1234/helloworld/

    Initial Pagelet Producer setup
    Let's assume that the Pagelet Producer server is running on http://pageletserver.company.com:8889/pagelets/
    First let's check that Pagelet Producer is up and running. In order to do that we just need to access the following URL: http://pageletserver.company.com:8889/pagelets/
    Now you can access the Pagelet Producer administration screens using this URL: http://pageletserver.company.com:8889/pagelets/admin
    If you connect to the internet via a proxy server, you need to configure the proxy in Pagelet Producer settings. In the Navigator pane, go to Jump To - Settings, click on "Proxy", and enter your proxy server configuration.

    Creating a resource
    The first thing you need to do is create a resource for your web page. This tells Pagelet Producer that all sub-paths of the web page should be proxied. It also allows you to set up common rules for how your web page should be proxied, and serves as a container for your pagelets.
    In the Navigator pane: Jump To - Resources
    Click on any existing resource (e.g. welcome_resource)
    Click on the "Create selected type" toolbar button at the top of the Navigator pane
    Select "Web" in the "Select Producer Type" dialog box and click "OK"
    Now, after the resource is created, click on the "General" sub-item and specify the following values:
    Name = AppServer
    Source URL = http://appserver.company.com:1234/
    Destination URL = /appserver/
    Click on the "Save" toolbar button at the top of the Navigator pane.
    After the resource is created, our web page becomes accessible at the URL: http://pageletserver.company.com:8889/pagelets/appserver/helloworld/
    So in the original web page address, the Source URL is replaced with the Pagelet Producer URL (http://pageletserver.company.com:8889/pagelets) + Destination URL.

    Creating a pagelet
    Now let's create the "Hello World" pagelet.
    Under the resource node activate the Pagelets subnode
    Click on the "Create selected type" toolbar button at the top of the Navigator pane
    Click on the "General" sub-node of the newly created pagelet and specify the following values:
    Name = Hello_World
    Library = MyLib
    Library is used for logical grouping. The portals use the "Library" value to group pagelets in their respective UIs. For example, when adding pagelets to a WebCenter Portal space you would see the individual pagelets listed under the "Library" name.
    URL Suffix = helloworld/index.html (this is where the Hello World page HTML is served from)
    Click on the "Save" toolbar button at the top of the Navigator pane.
    The Library name can be anything you want; it doesn't have to match the resource name at all. It is used as a logical grouping of pagelets, and you can include pagelets from multiple resources in the same library or create a new library for each pagelet.
    After you save the pagelet you can access it here: http://pageletserver.company.com:8889/pagelets/inject/v2/pagelet/MyLib/Hello_World which is: http://pageletserver.company.com:8889/pagelets/inject/v2/pagelet/ + [Library] + [Name]
    Or, to test the injection of the pagelet into an iframe, you can click on the pagelet's "Documentation" sub-node and use the "Access Pagelet using REST" URL.

    Clipping
    The pagelet that we just created covers the whole web page, but we want just the "Hello World" segment of it. So let's clip it.
    Under the Hello_World pagelet node activate the Clipper sub-node
    Click on the "Create selected type" toolbar button at the top of the Navigator pane
    Specify a Name for the newly created clipper, for example "c1"
    Click on the "Content" sub-node of the clipper
    Click on the "Launch Clipper" button; a new browser window will open
    By moving the mouse pointer over the web page, select the area you want to clip
    Click the left mouse button - the browser window will disappear and you will see that the Clipping Path was automatically generated
    Now let's save and access the link from the "Documentation" page again. Here's our pagelet, nicely clipped and ready to be used on your WebCenter Space.

    Read the article

  • OWB 11gR2 - Find and Search Metadata in Designer

    - by David Allan
    Here are some tools and techniques for finding objects, specifically in the design repository. There are ways of navigating and collating objects that are useful for day-to-day development and build-time usage - this includes features out of the box and utilities constructed on top. There are a variety of techniques to navigate and find objects in the repository; the first three are out of the box, the fourth is an expert utility.

    1. Navigating by the tree, grouping by project and module - OK if you are aware of the exact module/folder that objects reside in. The structure panel is a useful way of finding parts of an object, especially when large, rather than using the canvas. In large scale projects it helps to have accelerators (either find or collections below).

    2. Advanced find to search by name - 11gR2 included a find capability specifically for large scale projects. There were improvements in both the tree search and the object editors (including highlighting in mapping, for example). So you can now do regular-expression-based search and quickly navigate to objects within a repository.

    3. Collections - logically organize your objects into virtual folders by shortcutting the actual objects. This is useful for a range of things since all the OWB services operate on collections too (export/import, validation, deployment). See the post here for new collection functionality in 11gR2.

    4. Reports for searching by type, updated on, updated by, etc. Useful for activities such as periodic incremental actions (deploy all mappings changed in the past week). The report-style view is useful since I can quickly see who changed what and when. You can see all the audit details for objects within each object's property inspector, but it's useful to just get all objects changed today, for example, or all objects changed since my last build.

    This utility combines both UI extensions via experts and the public views on the repository. In the figure to the right you see the contextual option 'Object Search' which invokes the utility; you can see I have quite a number of modules within my project. Figuring out all the potential objects which have been changed is not simple. The utility is an expert which provides this kind of search capability: it produces a report of the objects in the design repository which satisfy some filter criteria. The criteria include:

    objects updated in the last n days
    optionally, a filter on the user who updated the objects
    a filter by project and by type (tables/mappings etc.)

    The search dialog appears with these options; you can multi-select the object types, so for example you can select TABLE and MAPPING. It's also possible to search across projects if need be. If you have multiple users using the repository you can define the OWB user name in the 'Updated by' property to restrict the report to just that user. Finally there is a search name that will be used for some of the options such as building a collection - this name is used for the collection to be built.

    In the example I have done, I've just searched my project for all process flows and mappings that users have updated in the last 7 days. The results of the query are returned in a table containing the object names, types, full paths, and audit details. The columns are sortable; you can sort the results by name, type, path, etc.

    One of the cool things here is that you can then perform operations on these objects - such as edit them, export a single selection or the entire results to MDL, create a collection from the results (now you have a saved set of references in the repository, so you could do deploy/export etc.), create a deployment script from the results... or even add in your own ideas! You can see from this that you can do bulk operations on sets of objects based on search results. So for example selecting the 'Build Collection' option creates a collection with all of the objects from my search; you can subsequently deploy/generate/maintain this collection of objects. Under the hood, the expert is just basic OMB commands from the product and the use of the public views on the design repository. You can see how easy it is to build up macro-like capabilities that will help you do day-to-day as well as build-like tasks on sets of objects.

    Read the article

  • Dynamic Class Inheritance For PHP

    - by VirtuosiMedia
    I have a situation where I think I might need dynamic class inheritance in PHP 5.3, but the idea doesn't sit well and I'm looking for a different design pattern to solve my problem if it's possible.

    Use Case
    I have a set of DB abstraction layer classes that dynamically compile SQL queries, with one DAL class for each DB type (MySQL, MsSQL, Oracle, etc.). Each table in the database has its own class that extends the appropriate DAL class. The idea is that you interact with the table classes, but never directly use the DAL class. If you want to support a different DB type for your app, you don't need to rewrite any queries or even any code; you simply change a setting that swaps one DAL class out for another... and that's it. To give you a better idea of how this is used, you can take a look at the DAL class, the table classes, and how they are used on this StackExchange Code Review page. To really understand what I'm trying to do, please take a look at my implementation first before suggesting a solution.

    Issues
    The strategy that I had used previously was to have all of the DAL classes share the same class name. This ruled out autoloading, so I had to manually load the appropriate DAL class in a switch statement. However, this approach presents some problems for testing and documentation purposes, so I'd like to find a more elegant way to load the correct DAL class.

    Update to clarify the issue
    The problem basically boils down to inconsistencies between the class name (pre-PHP 5.3) or class namespace (PHP 5.3) and its location in the directory structure. At this point, all of my DAL classes have the same name, DBObject, but reside in different folders: MySQL, Oracle, etc. My table classes all extend DBObject, but which DBObject they extend varies depending on which one has been loaded. Basically, I'm trying to have my cake and eat it too. The table classes act as a stable API and extend a dynamic backend, the DAL (DBObject) classes. It works great, but I outsmarted myself: because of the inconsistencies between the class names and their locations, I can't autoload the DBObject, which makes running unit tests and generating API docs impossible for the DBObject classes, because the tests and docs rely on autoloading. Just loading the appropriate DBObject into memory using a factory method won't work, because there will be times when I need to load multiple DBObjects for testing. Because the classes currently share a name, this causes a "class is already defined" error. I can make exceptions for the DBObjects in my test code, obviously, but I'm looking for something a little less hacky, as there may be future instances where something similar would need to be done.

    Solutions?
    Worst case scenario, I can continue my current strategy, but I don't like it very much, especially as I'll soon be converting my code to PHP 5.3. I suspect that I can use some sort of dynamic inheritance via either namespaces (preferred) or a dynamic class extension, but I haven't been able to find good examples of this implemented in the wild. In your answers, please suggest either an alternate pattern that would work for this use case or an example of dynamic inheritance done right. Please assume PHP 5.3 with namespaced code. Code examples are greatly encouraged.

    The preferred constraints for the solution are:
    DAL classes can be autoloaded.
    DAL classes don't share the exact same namespace, but share the same class name. As an example, I would prefer to use classes named DbObject in namespaces like Vm\Db\MySql and Vm\Db\Oracle.
    Table classes don't have to be rewritten with a change in DB type.
    The appropriate DB type is determined via a single setting only. That setting is the only thing that should need to change to interchange DB types.
    Ideally, the setting check should occur only once per page load, but I'm flexible on that.

    Read the article

  • Another sound not working post

    - by Thomas Smart
    Tried all the other "sound not working" posts I think; lost count. Purged/reinstalled alsa and pulse, rebooted, added my user to the audio group, tried various lines in the alsa config file such as "options snd-hda-intel model=" with different options like generic, auto, basic, default, etc. Tried pulseaudio -k && sudo alsa force-reload a few times, with and without rebooting.

    Hardware: 16GB RAM, core i7-4790, Intel Haswell motherboard with onboard sound and graphics.
    Multimedia: Audio Adapter: HDA-Intel-HDA Intel HDMI
    OS: Ubuntu server 14.04 with ubuntu-desktop installed.

    The GUI sound settings list only the dummy sound card.

    alsamixer -c 0
    ¦ Card: HDA Intel HDMI   F1: Help ¦
    ¦ Chip: Intel Haswell HDMI   F2: System information ¦
    ¦ View: F3:[Playback] F4: Capture F5: All   F6: Select sound card ¦
    ¦ Item: S/PDIF ¦
    ¦ +--+ ¦
    ¦ ¦OO¦ ¦
    ¦ +--+ ¦
    ¦ < S/PDIF > ¦

    aplay -l
    **** List of PLAYBACK Hardware Devices ****
    card 0: HDMI [HDA Intel HDMI], device 3: HDMI 0 [HDMI 0]
    Subdevices: 1/1
    Subdevice #0: subdevice #0

    aplay -L
    default
    Playback/recording through the PulseAudio sound server
    null
    Discard all samples (playback) or generate zero samples (capture)
    pulse
    PulseAudio Sound Server
    hdmi:CARD=HDMI,DEV=0
    HDA Intel HDMI, HDMI 0
    HDMI Audio Output
    dmix:CARD=HDMI,DEV=3
    HDA Intel HDMI, HDMI 0
    Direct sample mixing device
    dsnoop:CARD=HDMI,DEV=3
    HDA Intel HDMI, HDMI 0
    Direct sample snooping device
    hw:CARD=HDMI,DEV=3
    HDA Intel HDMI, HDMI 0
    Direct hardware device without any conversions
    plughw:CARD=HDMI,DEV=3
    HDA Intel HDMI, HDMI 0
    Hardware device with all software conversions

    cat /proc/asound/cards
    0 [HDMI ]: HDA-Intel - HDA Intel HDMI
    HDA Intel HDMI at 0xf7d14000 irq 46

    cat /proc/asound/devices
    1: : sequencer
    2: [ 0- 3]: digital audio playback
    3: [ 0- 0]: hardware dependent
    4: [ 0] : control
    33: : timer

    mplayer -ao alsa:device=hdmi /usr/share/sounds/ubuntu/stereo/system-ready.ogg
    MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team
    mplayer: could not connect to socket
    mplayer: No such file or directory
    Failed to open LIRC support. You will not be able to use your remote control.
    Playing /usr/share/sounds/ubuntu/stereo/system-ready.ogg.
    libavformat version 54.20.4 (external)
    Mismatching header version 54.20.3
    libavformat file format detected.
    [lavf] stream 0: audio (vorbis), -aid 0
    Load subtitles in /usr/share/sounds/ubuntu/stereo/
    ==========================================================================
    Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
    libavcodec version 54.35.0 (external)
    AUDIO: 44100 Hz, 1 ch, floatle, 80.0 kbit/5.67% (ratio: 10000->176400)
    Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis)
    ==========================================================================
    [AO_ALSA] alsa-lib: confmisc.c:768:(parse_card) cannot find card '1'
    [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_card_driver returned error: No such file or directory
    [AO_ALSA] alsa-lib: confmisc.c:392:(snd_func_concat) error evaluating strings
    [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory
    [AO_ALSA] alsa-lib: confmisc.c:1251:(snd_func_refer) error evaluating name
    [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
    [AO_ALSA] alsa-lib: conf.c:4727:(snd_config_expand) Evaluate error: No such file or directory
    [AO_ALSA] alsa-lib: pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM hdmi
    [AO_ALSA] Playback open error: No such file or directory
    Failed to initialize audio driver 'alsa:device=hdmi'
    Could not open/initialize audio device -> no sound.
    Audio: no sound
    Video: no video
    Exiting... (End of file)

    mplayer -ao alsa:device=hw=0.3 /usr/share/sounds/ubuntu/stereo/system-ready.ogg
    MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team
    mplayer: could not connect to socket
    mplayer: No such file or directory
    Failed to open LIRC support. You will not be able to use your remote control.
    Playing /usr/share/sounds/ubuntu/stereo/system-ready.ogg.
    libavformat version 54.20.4 (external)
    Mismatching header version 54.20.3
    libavformat file format detected.
    [lavf] stream 0: audio (vorbis), -aid 0
    Load subtitles in /usr/share/sounds/ubuntu/stereo/
    ==========================================================================
    Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
    libavcodec version 54.35.0 (external)
    AUDIO: 44100 Hz, 1 ch, floatle, 80.0 kbit/5.67% (ratio: 10000->176400)
    Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis)
    ==========================================================================
    [AO_ALSA] Format floatle is not supported by hardware, trying default.
    AO: [alsa] 44100Hz 2ch s16le (2 bytes per sample)
    Video: no video
    Starting playback...
    A: 0.4 (00.4) of 0.8 (00.7) 0.1%
    Exiting... (End of file)

    Thank you for your time and help :)

    Read the article

  • Deploying BAM Data Control Application to WLS server

    - by [email protected]
    Typically we would test our ADF pages that use the BAM Data Control using the integrated WLS server (ADRS). If we have to deploy this same application to a standalone WLS, we have to make sure we have the BAM server connection created in WLS; unless we do that, we may face runtime errors.

    In Development mode of WLS (Reference)
    For development-mode WebLogic Server, you can set the mode to OVERWRITE to test user names and passwords. You can set the mode by running setDomainEnv.cmd or setDomainEnv.sh with the following option added to the command. Add the following to the JAVA_PROPERTIES entry in the <FMW_HOME>/user_projects/domains/<yourdomain>/bin/setDomainEnv.sh file:
    -Djps.app.credential.overwrite.allowed=true

    In Production mode of WLS
    Enable MDS: create and/or register your MDS repository. For more details refer to this. Edit adf-config.xml from your application and add the following tag:

    <adf-mds-config xmlns="http://xmlns.oracle.com/adf/mds/config">
      <mds-config version="11.1.1.000">
        <persistence-config>
          <metadata-store-usages>
            <metadata-store-usage default-cust-store="true" deploy-target="true" id="myRepos">
            </metadata-store-usage>
          </metadata-store-usages>
        </persistence-config>
      </mds-config>
    </adf-mds-config>

    Deploy the application to the WLS server, picking the appropriate repository during deployment from the MDS Repository dialog that pops up.

    Enterprise Manager (use these steps if using a version prior to the 11gR1 PS1 release of JDeveloper)
    Go to EM (http://<host>:<port>/em)
    In the left pane, under Deployments, select Application1 (your application)
    In the right pane, in the top dropdown select "System Mbean Browser->oracle.adf.share.connections->Server: AdminServer->Server: AdminServer->Application:<Appname>->ADFConnections"
    In the right pane click "Operations->CreateConnection"
    Enter Connection Type as "BAMConnection"
    Enter the connection name, the same as the one defined in JDev
    Click "Invoke"
    Click "Return"
    Click on Operation->Save
    Now in the ADFConnections in the navigator, select the connection just created and enter all the configuration details.
    Save and run the page.

    Enterprise Manager (use these steps, or the steps above, if using 11gR1 PS1 or newer)
    Go to EM (http://<host>:<port>/em)
    In the left pane, under Deployments, select Application1 (your application)
    In the right pane, click on "Application Deployment" to invoke the dropdown; in that, select "ADF -> Configure ADF Connections"
    Select Connection Type as "BAM" from the drop down
    Enter the Connection Name to be the same as the one defined in JDev
    Click on "Create Connection". This should add a new row below under "BAM Connections"
    Select the new connection and click on the "Edit" icon. This should bring up a dialog
    Specify appropriate values for all connection parameters - Username, Password, BAM Server Host, BAM Server Port, Webtier Server Host, Webtier Server Port, and BAM Webtier Protocol - and then click OK to dismiss the dialog
    Click on "Apply"
    Run the page.

    Read the article

  • Tunnel is up but cannot ping directly connected network

    - by drmanalo
We configured a site-to-site VPN and here is the topology. I control the network on the left but not the one on the right. All devices in our network have public IPs.

Server---ASA5505---Cisco887======Internet=====ASA5510---devices

I can see the tunnel is up and can do an extended ping using a loopback interface. From the 10.175 and 10.165 networks, they can also ping my loopback address. I can also dial in using a Cisco VPN client and connect to the devices on the right.

#show crypto session
Crypto session current status
Interface: Vlan3
Profile: xxx-profile
Session status: UP-ACTIVE
Peer: 213.121.x.x port 500
  IKEv1 SA: local 77.245.x.x/500 remote 213.121.x.x/500 Active
  IPSEC FLOW: permit ip 10.0.20.0/255.255.255.240 10.175.0.0/255.255.128.0
        Active SAs: 0, origin: crypto map
  IPSEC FLOW: permit ip 10.0.20.0/255.255.255.240 10.165.0.0/255.255.192.0
        Active SAs: 2, origin: crypto map

#ping 10.165.29.39 source loopback 2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.165.29.39, timeout is 2 seconds:
Packet sent with a source address of 10.0.20.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 16/17/20 ms

My problem is that the devices on the right cannot reach my server. They can only ping the loopback address and nothing else. I'm pasting some diagnostics related to routing, thinking perhaps routing is my issue. I can paste the full running-config from my side of the network if needed.

#show ip int brief
Interface            IP-Address      OK? Method Status                Protocol
ATM0                 unassigned      YES NVRAM  administratively down down
Ethernet0            unassigned      YES NVRAM  administratively down down
FastEthernet0        unassigned      YES unset  up                    up       connected to ASA
FastEthernet1        unassigned      YES unset  administratively down down
FastEthernet2        unassigned      YES unset  administratively down down
FastEthernet3        unassigned      YES unset  up                    up
Loopback1            10.0.20.65      YES NVRAM  up                    up
Loopback2            10.0.20.1       YES NVRAM  up                    up
Virtual-Template1    77.245.x.x      YES unset  up                    down
Virtual-Template2    77.245.x.x      YES unset  up                    down
Vlan1                unassigned      YES unset  down                  down
Vlan3                77.245.x.x      YES NVRAM  up                    up       connected to the Internet

#show run | section ip route
ip route 0.0.0.0 0.0.0.0 77.245.x.x
ip route 213.121.240.36 255.255.255.255 Vlan3

#show access-list
Extended IP access list 102
    10 permit ip 10.0.20.0 0.0.0.15 10.175.0.0 0.0.127.255 (3332 matches)
    20 permit ip 10.0.20.0 0.0.0.15 10.165.0.0 0.0.63.255 (3498 matches)

#show vlan-switch
VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
1    default                          active
3    VLAN0003                         active    Fa0, Fa1, Fa2, Fa3
1002 fddi-default                     act/unsup
1003 token-ring-default               act/unsup
1004 fddinet-default                  act/unsup
1005 trnet-default                    act/unsup

#show ip route
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
       + - replicated route, % - next hop override

Gateway of last resort is 77.245.x.x to network 0.0.0.0

S*    0.0.0.0/0 [1/0] via 77.245.x.x
      10.0.0.0/8 is variably subnetted, 5 subnets, 3 masks
C        10.0.20.0/28 is directly connected, Loopback2
L        10.0.20.1/32 is directly connected, Loopback2
C        10.0.20.64/28 is directly connected, Loopback1
L        10.0.20.65/32 is directly connected, Loopback1
S        10.165.0.0/18 [1/0] via 213.121.x.x
      77.0.0.0/8 is variably subnetted, 3 subnets, 3 masks
S        77.0.0.0/8 [1/0] via 77.245.x.x
C        77.245.x.x/29 is directly connected, Vlan3
L        77.245.x.x/32 is directly connected, Vlan3
      213.121.x.0/32 is subnetted, 1 subnets
S        213.121.x.x is directly connected, Vlan3

I read some of the posts here which lead to a NATing issue, but I'm not sure of my next step. Should I translate my public address to private and route it to the loopback address? (only guessing)

CISCO VPN site to site
Site-to-Site VPN between two ASA 5505s only working in one direction

Hope someone could help. Thanks in advance!

    Read the article

  • C# 4.0: COM Interop Improvements

    - by Paulo Morgado
Dynamic resolution as well as named and optional arguments greatly improve the experience of interoperating with COM APIs such as the Office Automation Primary Interop Assemblies (PIAs). But, in order to ease COM Interop development even more, a few COM-specific features were also added to C# 4.0.

Omitting ref

Because of a different programming model, many COM APIs contain a lot of reference parameters. These parameters are typically not meant to mutate a passed-in argument, but are simply another way of passing value parameters. Specifically for COM methods, the compiler allows you to write the method call passing the arguments by value; it automatically generates the necessary temporary variables to hold the values in order to pass them by reference, and discards their values after the call returns. From the point of view of the programmer, the arguments are being passed by value.

This method call:

object fileName = "Test.docx";
object missing = Missing.Value;
document.SaveAs(ref fileName, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing);

can now be written like this:

document.SaveAs("Test.docx", Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value);

And because all parameters that receive the Missing.Value value have that value as their default, the method call can even be reduced to this:

document.SaveAs("Test.docx");

Dynamic Import

Many COM methods accept and return variant types, which are represented in the PIAs as object. In the vast majority of cases, a programmer calling these methods already knows the static type of a returned object from the context of the call, but has to explicitly perform a cast on the returned value to make use of that knowledge. These casts are so common that they constitute a major nuisance. To make the developer's life easier, it is now possible to import the COM APIs in such a way that variants are instead represented using the type dynamic, which means that COM signatures now have occurrences of dynamic instead of object. This means that members of a returned object can be easily accessed, or assigned into a strongly typed variable, without having to cast.

Instead of this code:

((Excel.Range)(excel.Cells[1, 1])).Value2 = "Hello World!";

this code can now be used:

excel.Cells[1, 1] = "Hello World!";

And instead of this:

Excel.Range range = (Excel.Range)(excel.Cells[1, 1]);

this can be used:

Excel.Range range = excel.Cells[1, 1];

Indexed And Default Properties

A few COM interface features are still not available in C#. At the top of the list are indexed properties and default properties. As mentioned above, these will be possible if the COM interface is accessed dynamically, but will not be recognized by statically typed C# code.

No PIAs – Type Equivalence And Type Embedding

For assemblies identified with PrimaryInteropAssemblyAttribute, the compiler will create equivalent types (interfaces, structs, enumerations and delegates) and embed them in the generated assembly. To reduce the final size of the generated assembly, only the used types and their used members will be generated and embedded.
Although this makes development and deployment of applications using the COM components easier because there’s no need to deploy the PIAs, COM component developers are still required to build the PIAs.
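
As a quick, combined illustration of the features above, here is a minimal sketch against the Word PIA; it assumes a reference to Microsoft.Office.Interop.Word and Word installed on the machine, and the parameter names (ReadOnly, Visible, FileName) are taken from the Word 2010 PIA rather than from this post, so verify them against your Office version:

// Minimal sketch: omitted ref, optional parameters and named arguments
// combined against the Word PIA (assumptions noted above).
using Word = Microsoft.Office.Interop.Word;

class ComInteropSketch
{
    static void Main()
    {
        var word = new Word.Application();

        // Every optional parameter is omitted except the ones named here;
        // no ref keywords and no Missing.Value placeholders are needed.
        Word.Document document = word.Documents.Open(@"C:\Temp\Test.docx", ReadOnly: false, Visible: true);

        // The compiler creates the temporaries COM's ref parameters expect
        // and discards them after the call returns.
        document.SaveAs(FileName: @"C:\Temp\Test2.docx");

        word.Quit();
    }
}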

    Read the article

  • SQL SERVER – Denali Feature – Zoom Query Editor

    - by pinaldave
SQL Server's next version, ‘Denali’, is coming up with a very neat feature that can be used during presentations or group discussions, or by people who prefer large fonts: the query editor can now be zoomed. I have increased the font size to 400 percent, and for that reason the fonts are very large. You can adjust the zoom to whatever font size is convenient for you. One more reason to go for the next version of SQL Server.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • eth0 and eth1 both assigned same IP on boot

    - by Banjer
I have a physical SLES 11 SP2 server on a Sun Fire X4140 that is giving me problems with networking upon reboot. The NICs are onboard. The networking appears successful during boot, but network services such as NFS fail hard. This is because eth0 and eth1 are both receiving the same configuration and are both ifup-ed. Once everything times out and I'm at the console, ifconfig shows that eth0 and eth1 are UP and running with the same IP. Attempting to ping anything in that subnet fails. Restarting the network service fixes the issue. eth0 is the correct NIC that should be configured as primary, per the MAC address.

Question: What's causing eth1 to be brought up with the same config as eth0? I do not have a config script set up for eth1:

banjer@harp:~> ls -la /etc/sysconfig/network/
total 104
drwxr-xr-x 6 root root  4096 Jun 11 12:21 .
drwxr-xr-x 6 root root  4096 Apr 10 09:46 ..
-rw-r--r-- 1 root root 13916 Apr 10 09:32 config
-rw-r--r-- 1 root root  9952 Apr 10 09:36 dhcp
-rw------- 1 root root   180 Jun 11 12:21 ifcfg-eth0
-rw------- 1 root root   180 Jun 11 12:21 ifcfg-eth3
-rw------- 1 root root   172 Feb  1 08:32 ifcfg-lo
-rw-r--r-- 1 root root 29333 Feb  1 08:32 ifcfg.template
drwxr-xr-x 2 root root  4096 Apr 10 09:32 if-down.d
-rw-r--r-- 1 root root   239 Feb  1 08:32 ifroute-lo
drwxr-xr-x 2 root root  4096 Apr 10 09:33 if-up.d
drwx------ 2 root root  4096 May  5  2010 providers
-rw-r--r-- 1 root root    25 Nov 16  2010 routes
drwxr-xr-x 2 root root  4096 Apr 10 09:36 scripts

On a side note, eth3 is also configured with an IP in a different subnet, but this has not posed any problems. FYI, the kernel module being used is forcedeth.

banjer@harp:~> sudo cat /etc/sysconfig/network/ifcfg-eth0
BOOTPROTO='static'
BROADCAST=''
ETHTOOL_OPTIONS=''
IPADDR='172.21.64.25/20'
MTU=''
NAME='MCP55 Ethernet'
NETWORK=''
REMOTE_IPADDR=''
STARTMODE='auto'
USERCONTROL='no'
ONBOOT="yes"

Here's eth3 in case you need to see it:

banjer@harp:~> sudo cat /etc/sysconfig/network/ifcfg-eth3
BOOTPROTO='static'
BROADCAST=''
ETHTOOL_OPTIONS=''
IPADDR='172.11.200.4/24'
MTU=''
NAME='MCP55 Ethernet'
NETWORK=''
REMOTE_IPADDR=''
STARTMODE='auto'
USERCONTROL='no'
ONBOOT="yes"

Perhaps it's something related to udev? 70-persistent-net.rules looks OK to me, but I may not understand it completely.

banjer@harp:~> cat /etc/udev/rules.d/70-persistent-net.rules
# This file was automatically generated by the /lib/udev/write_net_rules
# program, run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single
# line, and change only the value of the NAME= key.

# PCI device 0x10de:0x0373 (forcedeth)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:18:4f:8d:85:4c", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"

# PCI device 0x10de:0x0373 (forcedeth)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:18:4f:8d:85:4a", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

# PCI device 0x10de:0x0373 (forcedeth)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:18:4f:8d:85:4b", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

# PCI device 0x10de:0x0373 (forcedeth)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:18:4f:8d:85:4d", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth3"

# PCI device 0x1077:0x3032 (qla3xxx)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:c1:dd:0e:34:6c", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth4"

Any other thoughts on what would cause this?

    Read the article

  • Banshee encountered a Fatal Error (sqlite error 11: database disk image is malformed)

    - by Nik
I am running Ubuntu 10.10 Maverick Meerkat, and recently I have been helping to test indicator-weather using the unstable builds. However, there was a bug which caused my system to freeze suddenly (due to indicator-weather, not Ubuntu), and the only way to recover is to do a hard reset of the system. This happened a couple of times. When I tried to open Banshee after a couple of such resets, I got the following fatal error, which forces me to quit Banshee. The screenshot is not clear enough to read the error, so I am posting it below:

An unhandled exception was thrown: Sqlite error 11: database disk image is malformed (SQL: BEGIN TRANSACTION; DELETE FROM CoreSmartPlaylistEntries WHERE SmartPlaylistID IN (SELECT SmartPlaylistID FROM CoreSmartPlaylists WHERE IsTemporary = 1); DELETE FROM CoreSmartPlaylists WHERE IsTemporary = 1; COMMIT TRANSACTION)

at Hyena.Data.Sqlite.Connection.CheckError (Int32 errorCode, System.String sql) [0x00000] in <filename unknown>:0
at Hyena.Data.Sqlite.Connection.Execute (System.String sql) [0x00000] in <filename unknown>:0
at Hyena.Data.Sqlite.HyenaSqliteCommand.Execute (Hyena.Data.Sqlite.HyenaSqliteConnection hconnection, Hyena.Data.Sqlite.Connection connection) [0x00000] in <filename unknown>:0

Exception has been thrown by the target of an invocation.

at System.Reflection.MonoCMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0
at System.Reflection.MonoCMethod.Invoke (BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0
at System.Reflection.ConstructorInfo.Invoke (System.Object[] parameters) [0x00000] in <filename unknown>:0
at System.Activator.CreateInstance (System.Type type, Boolean nonPublic) [0x00000] in <filename unknown>:0
at System.Activator.CreateInstance (System.Type type) [0x00000] in <filename unknown>:0
at Banshee.Gui.GtkBaseClient.Startup () [0x00000] in <filename unknown>:0
at Hyena.Gui.CleanRoomStartup.Startup (Hyena.Gui.StartupInvocationHandler startup) [0x00000] in <filename unknown>:0

.NET Version: 2.0.50727.1433
OS Version: Unix 2.6.35.27

Assembly Version Information: gkeyfile-sharp (1.0.0.0) Banshee.AudioCd (1.9.0.0) Banshee.MiniMode (1.9.0.0) Banshee.CoverArt (1.9.0.0) indicate-sharp (0.4.1.0) notify-sharp (0.4.0.0) Banshee.SoundMenu (1.9.0.0) Banshee.Mpris (1.9.0.0) Migo (1.9.0.0) Banshee.Podcasting (1.9.0.0) Banshee.Dap (1.9.0.0) Banshee.LibraryWatcher (1.9.0.0) Banshee.MultimediaKeys (1.9.0.0) Banshee.Bpm (1.9.0.0) Banshee.YouTube (1.9.0.0) Banshee.WebBrowser (1.9.0.0) Banshee.Wikipedia (1.9.0.0) pango-sharp (2.12.0.0) Banshee.Fixup (1.9.0.0) Banshee.Widgets (1.9.0.0) gio-sharp (2.14.0.0) gudev-sharp (1.0.0.0) Banshee.Gio (1.9.0.0) Banshee.GStreamer (1.9.0.0) System.Configuration (2.0.0.0) NDesk.DBus.GLib (1.0.0.0) gconf-sharp (2.24.0.0) Banshee.Gnome (1.9.0.0) Banshee.NowPlaying (1.9.0.0) Mono.Cairo (2.0.0.0) System.Xml (2.0.0.0) Banshee.Core (1.9.0.0) Hyena.Data.Sqlite (1.9.0.0) System.Core (3.5.0.0) gdk-sharp (2.12.0.0) Mono.Addins (0.4.0.0) atk-sharp (2.12.0.0) Hyena.Gui (1.9.0.0) gtk-sharp (2.12.0.0) Banshee.ThickClient (1.9.0.0) Nereid (1.9.0.0) NDesk.DBus.Proxies (0.0.0.0) Mono.Posix (2.0.0.0) NDesk.DBus (1.0.0.0) glib-sharp (2.12.0.0) Hyena (1.9.0.0) System (2.0.0.0) Banshee.Services (1.9.0.0) Banshee (1.9.0.0) mscorlib (2.0.0.0)

Platform Information: Linux 2.6.35-27-generic i686 unknown GNU/Linux

Disribution Information: [/etc/lsb-release] DISTRIB_ID=Ubuntu DISTRIB_RELEASE=10.10 DISTRIB_CODENAME=maverick DISTRIB_DESCRIPTION="Ubuntu 10.10" [/etc/debian_version] squeeze/sid

Just to make it clear, this happened only after the hard resets and not before. I used to use Banshee every day and it worked perfectly. Can anyone help me fix this?

    Read the article

  • A Quick Primer on SharePoint Customization

    - by PeterBrunone
This one goes out to all the people who have been asked to change the way a SharePoint site looks. Management wants to know how long it will take, and you can whip that out by tomorrow, right? If you don't have time to prepare a treatise on what's involved, or if you just want to lend some extra weight to your case by quoting a blogger who was an MVP for seven years, then dive right in; this post is for you.

There are three main components of SharePoint visual customization:

1) Theme – A theme encompasses all the standardized text formatting and coloring (borders, fonts, etc.), including the background images of various sections. All told, there could be around 50 images involved, and a few hundred CSS (style) classes. Installing a theme once it's been created is no great feat. Given the number of pieces, of course, creating a new theme could take anywhere from a day to a week... once decisions have been made about the desired appearance.

2) Master Page – A master page provides the framework for page layout. This includes all the top and side menus, where content shows up, et cetera. Master pages have been around for a long time in ASP.NET (Microsoft's web development platform), and they do require some .NET programming knowledge. Beyond that, in SharePoint, there are a few dozen controls which the system expects to find on a given page. They're not all used at once, but if they're not there when they're needed, chaos ensues. Estimating a custom master page is difficult, as it depends on the level of customization. I've been on projects where I was brought in simply to fix some problems and add a few finishing touches, and it took 2-3 weeks. Master page customization requires a large amount of testing time to make sure that the HTML, JavaScript, CSS, and control placement all work well together.

3) Individual page layout – Each page (ideally) uses a master page for its template, but within the content areas defined by the master page, web parts can be added, removed, and configured from within the browser. The wireframe that Brent provided could most likely be completed simply by manipulating the content on the home page in this fashion, and we had allowed about a day of effort for the task. If needed, further functionality can be provided by an experienced ASP.NET developer; custom forms are a common example. This of course is a bit more in-depth than simple content manipulation and could take several days per page (or more; there's really no way to quantify this without a set of requirements).

That's basically it. To recap: Fonts and coloring are done with themes, and can take anywhere from a day to a week to create (not counting creative time); required technical skills include HTML, CSS, and image manipulation. Templated layout is done with master pages, and generally requires a developer familiar with both ASP.NET and SharePoint in particular; it can have far-reaching consequences depending on the complexity of the changes, and could add weeks or months to a project.
Page layout can be as simple as content manipulation in the web browser, taking a few hours per page, or it can involve more detail, like custom forms, and can require programming expertise and significantly more development time.

    Read the article

  • Increase Font Size of CHM files

    - by Rohit Gupta
This may be the way to do it:

1. From the CHM menu, click Options.
2. Click Internet Options.
3. From the Internet Options window, click the Accessibility button.
4. Check the box "Ignore font styles specified on Web pages".
5. Check the box "Ignore font sizes specified on Web pages".
6. Click OK.
7. Click the Fonts button and select the font you want.
8. Click OK.

That's it.

    Read the article

  • Enhanced Dynamic Filtering

    - by Ricardo Peres
Remember my last post on dynamic filtering? Well, this time I'm extending the code in order to allow two levels of querying.

Match type, represented by the following options:

public enum MatchType
{
    StartsWith = 0,
    Contains = 1
}

And word match:

public enum WordMatch
{
    AnyWord = 0,
    AllWords = 1,
    ExactPhrase = 2
}

You can combine the two levels in order to achieve the following combinations:

MatchType.StartsWith + WordMatch.AnyWord: matches any record that starts with any of the words specified
MatchType.StartsWith + WordMatch.AllWords: not available: does not make sense, throws an exception
MatchType.StartsWith + WordMatch.ExactPhrase: matches any record that starts with the exact specified phrase
MatchType.Contains + WordMatch.AnyWord: matches any record that contains any of the specified words
MatchType.Contains + WordMatch.AllWords: matches any record that contains all of the specified words
MatchType.Contains + WordMatch.ExactPhrase: matches any record that contains the exact specified phrase

Here is the code:

public static IList Search(IQueryable query, Type entityType, String dataTextField, String phrase, MatchType matchType, WordMatch wordMatch, Int32 maxCount)
{
    String[] terms = phrase.Split(' ').Distinct().ToArray();
    StringBuilder result = new StringBuilder();
    PropertyInfo displayProperty = entityType.GetProperty(dataTextField);
    IList searchList = null;

    MethodInfo orderByMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m => m.Name == "OrderBy").ToArray()[0].MakeGenericMethod(entityType, displayProperty.PropertyType);
    MethodInfo takeMethod = typeof(Queryable).GetMethod("Take", BindingFlags.Public | BindingFlags.Static).MakeGenericMethod(entityType);
    MethodInfo whereMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m => m.Name == "Where").ToArray()[0].MakeGenericMethod(entityType);
    MethodInfo distinctMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m => m.Name == "Distinct" && m.GetParameters().Length == 1).Single().MakeGenericMethod(entityType);
    MethodInfo toListMethod = typeof(Enumerable).GetMethod("ToList", BindingFlags.Static | BindingFlags.Public).MakeGenericMethod(entityType);
    MethodInfo matchMethod = typeof(String).GetMethod((matchType == MatchType.StartsWith) ? "StartsWith" : "Contains", new Type[] { typeof(String) });

    MemberExpression member = Expression.MakeMemberAccess(Expression.Parameter(entityType, "n"), displayProperty);
    MethodCallExpression call = null;
    LambdaExpression where = null;
    LambdaExpression orderBy = Expression.Lambda(member, member.Expression as ParameterExpression);

    switch (matchType)
    {
        case MatchType.StartsWith:
            switch (wordMatch)
            {
                case WordMatch.AnyWord:
                    call = Expression.Call(member, matchMethod, Expression.Constant(terms[0]));
                    where = Expression.Lambda(call, member.Expression as ParameterExpression);

                    for (Int32 i = 1; i < terms.Length; ++i)
                    {
                        MethodCallExpression exp = Expression.Call(member, matchMethod, Expression.Constant(terms[i]));
                        where = Expression.Lambda(Expression.Or(where.Body, exp), where.Parameters.ToArray());
                    }
                    break;

                case WordMatch.ExactPhrase:
                    call = Expression.Call(member, matchMethod, Expression.Constant(phrase));
                    where = Expression.Lambda(call, member.Expression as ParameterExpression);
                    break;

                case WordMatch.AllWords:
                    throw (new Exception("The match type StartsWith is not supported with word match AllWords"));
            }
            break;

        case MatchType.Contains:
            switch (wordMatch)
            {
                case WordMatch.AnyWord:
                    call = Expression.Call(member, matchMethod, Expression.Constant(terms[0]));
                    where = Expression.Lambda(call, member.Expression as ParameterExpression);

                    for (Int32 i = 1; i < terms.Length; ++i)
                    {
                        MethodCallExpression exp = Expression.Call(member, matchMethod, Expression.Constant(terms[i]));
                        where = Expression.Lambda(Expression.Or(where.Body, exp), where.Parameters.ToArray());
                    }
                    break;

                case WordMatch.ExactPhrase:
                    call = Expression.Call(member, matchMethod, Expression.Constant(phrase));
                    where = Expression.Lambda(call, member.Expression as ParameterExpression);
                    break;

                case WordMatch.AllWords:
                    call = Expression.Call(member, matchMethod, Expression.Constant(terms[0]));
                    where = Expression.Lambda(call, member.Expression as ParameterExpression);

                    for (Int32 i = 1; i < terms.Length; ++i)
                    {
                        MethodCallExpression exp = Expression.Call(member, matchMethod, Expression.Constant(terms[i]));
                        where = Expression.Lambda(Expression.AndAlso(where.Body, exp), where.Parameters.ToArray());
                    }
                    break;
            }
            break;
    }

    query = orderByMethod.Invoke(null, new Object[] { query, orderBy }) as IQueryable;
    query = whereMethod.Invoke(null, new Object[] { query, where }) as IQueryable;

    if (maxCount != 0)
    {
        query = takeMethod.Invoke(null, new Object[] { query, maxCount }) as IQueryable;
    }

    searchList = toListMethod.Invoke(null, new Object[] { query }) as IList;

    return (searchList);
}

And this is how you'd use it:

IQueryable query = ctx.MyEntities;
IList list = Search(query, typeof(MyEntity), "Name", "Ricardo Peres", MatchType.Contains, WordMatch.ExactPhrase, 10 /*0 for all*/);

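To see what the reflection-built query boils down to, here is a minimal strongly-typed sketch of the MatchType.Contains + WordMatch.AllWords case; MyEntity and its Name property are hypothetical placeholders for your own model. Note that AnyWord cannot be written this way - stacking Where calls always ANDs the predicates - which is exactly why the code above ORs the expressions together instead:

using System.Linq;

public class MyEntity
{
    public string Name { get; set; }
}

public static class SearchSketch
{
    // Strongly-typed equivalent of Search(..., MatchType.Contains, WordMatch.AllWords, ...).
    public static IQueryable<MyEntity> ContainsAllWords(IQueryable<MyEntity> query, string phrase, int maxCount)
    {
        string[] terms = phrase.Split(' ').Distinct().ToArray();

        // Each extra Where ANDs another Contains test onto the query,
        // mirroring the Expression.AndAlso loop in the dynamic version.
        foreach (string term in terms)
        {
            string t = term; // copy for the closure (C# 4 foreach capture semantics)
            query = query.Where(n => n.Name.Contains(t));
        }

        query = query.OrderBy(n => n.Name);

        return (maxCount != 0) ? query.Take(maxCount) : query;
    }
}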
    Read the article

  • Web.config WordPress rewrite rules next to Magento

    - by Flo
I've installed Magento on IIS in folder E:\mydomain\wwwroot (I already have it all running correctly). I have no deeper magento folder; I placed all files directly in the wwwroot folder, so:

wwwroot\app
wwwroot\downloader
wwwroot\errors
wwwroot\includes
etc...

UPDATE: since I'm on IIS, my .htaccess is ignored completely and my web.config rules are used instead.

Here's my web.config in folder e:\mydomain\wwwroot:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Magento SEO: remove index.php from URL">
          <match url="^(?!index.php)([^?#]*)(\\?([^#]*))?(#(.*))?" />
          <conditions>
            <add input="{URL}" pattern="^/(media|skin|js)/" ignoreCase="false" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsFile" ignoreCase="false" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" ignoreCase="false" negate="true" />
          </conditions>
          <action type="Rewrite" url="index.php/{R:0}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

Next, I wanted to install WordPress. I unzipped all files into folder e:\mydomain\wwwroot\wordpress, browsed to www.mydomain.com/wordpress/wp-admin/install.php, and configured everything for my database. Everything was installed correctly. I then navigate to http://www.mydomain.com/wordpress/wp-login.php where I type my credentials. I seem to be logged in and am redirected to http://www.mydomain.com/wordpress/wp-admin/, but there I receive an empty page.

I enabled detailed error messages in IIS following this article: http://www.iis.net/learn/troubleshoot/diagnosing-http-errors/how-to-use-http-detailed-errors-in-iis

I also checked with Fiddler and see that I receive a 500 error:

GET /wordpress/wp-admin/ HTTP/1.1
Host: www.mydomain.com
Connection: keep-alive
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.76 Safari/537.36
Referer: http://www.mydomain.com/wordpress/wp-login.php
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8,nl;q=0.6
Cookie: wordpress_fabec4083cf12d8de89c98e8aef4b7e3=floran%7C1381236774%7C2d8edb4fc6618f290fadb49b035cad31; wordpress_test_cookie=WP+Cookie+check; wordpress_logged_in_fabec4083cf12d8de89c98e8aef4b7e3=floran%7C1381236774%7Cbf822163926b8b8df16d0f1fefb6e02e

HTTP/1.1 500 Internal Server Error
Content-Type: text/html
Server: Microsoft-IIS/7.5
X-Powered-By: PHP/5.4.14
X-Powered-By: ASP.NET
Date: Sun, 06 Oct 2013 12:56:03 GMT
Content-Length: 0

My WordPress web.config in folder e:\mydomain\wwwroot\wordpress contains:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="wordpress" patternSyntax="Wildcard">
          <match url="*"/>
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true"/>
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true"/>
          </conditions>
          <action type="Rewrite" url="index.php"/>
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

I also want my WordPress articles to be available at www.mydomain.com/blog instead of www.mydomain.com/wordpress. Of course, my admin links for Magento and WordPress should also work. How can I configure my web.config files to achieve the above?
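
For the /blog requirement, one possible direction (a sketch only, not tested against this exact setup; the rule name and pattern are assumptions): add a rewrite rule to the root web.config, ahead of the Magento rule, that maps /blog/* onto the physical wordpress folder, and then change the WordPress Site Address (URL) setting to http://www.mydomain.com/blog so the generated links match. Something along these lines:

<!-- Sketch: root-level rule mapping /blog/* onto the /wordpress folder.
     Assumes the WordPress Site Address (URL) is changed to /blog. -->
<rule name="blog to wordpress" stopProcessing="true">
  <match url="^blog(/.*)?$" />
  <action type="Rewrite" url="wordpress{R:1}" />
</rule>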

    Read the article

  • Browser History ASP.Net AJAX: Microsoft.Web.Preview

    - by Narendra Tiwari
I remember that in 2006 we were working on a portal for our client Venetian, Las Vegas, and the portal was full of AJAX features. One of my friends faced a challenge: retaining browser history across all the AJAX operations. In terms of user experience it is an important aspect which could not be avoided in that scenario. At the time we made some workarounds to achieve it, but they may not have been the perfect solution.

Now, with Microsoft AJAX, a lot of such features can be achieved with optimum efficiency. Microsoft AJAX has grown its features over the past few years. Microsoft.Web.Preview.dll is an add-on used in conjunction with ASP.NET AJAX, and it contains a control named "History" for that purpose.

Source code: http://download.microsoft.com/download/8/3/1/831ffcd7-c571-4075-b8fa-6ff678794f60/CS-ASP-ASPBrowserHistoryinAJAX_cs.zip

Below is a small sample to demonstrate the control.

1/ Get the dll from the bin folder of the above source code, and add a reference to it in your web application.

2/ Right-click on the toolbox panel, select Choose Items, and browse to the assembly. Now you will be able to see the History control.

3/ Add the section group below to web.config under <configSections>:

<sectionGroup name="microsoft.web.preview" type="Microsoft.Web.Preview.Configuration.PreviewSectionGroup, Microsoft.Web.Preview">
  <section name="search" type="Microsoft.Web.Preview.Configuration.SearchSection, Microsoft.Web.Preview" requirePermission="false" allowDefinition="MachineToApplication"/>
  <section name="searchSiteMap" type="Microsoft.Web.Preview.Configuration.SearchSiteMapSection, Microsoft.Web.Preview" requirePermission="false" allowDefinition="MachineToApplication"/>
  <section name="diagnostics" type="Microsoft.Web.Preview.Configuration.DiagnosticsSection, Microsoft.Web.Preview" requirePermission="false" allowDefinition="MachineToApplication"/>
</sectionGroup>

4/ Now create a simple web page with a textbox (txt1) and a button (btn1) in an UpdatePanel, and a History control (History1). We will fill in the textbox and post the form by clicking the button a few times, then verify that the browser history is retained. Remember: the button and textbox must be inside the UpdatePanel, and the History control outside the UpdatePanel.

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="History.aspx.cs" Inherits="History" %>
<%@ Register Assembly="Microsoft.Web.Preview" Namespace="Microsoft.Web.Preview.UI.Controls" TagPrefix="cc1" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
    <title>Untitled Page</title>
</head>
<body>
    <form id="form1" runat="server">
    <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true"></asp:ScriptManager>
    <div>
        <cc1:History ID="History1" runat="server" OnNavigate="History1_Navigate">
        </cc1:History>
        <asp:UpdatePanel ID="up1" runat="server">
            <ContentTemplate>
                <asp:TextBox ID="txt1" runat="server"></asp:TextBox><br />
                <asp:Button ID="btn1" runat="server" Text="Test" OnClick="btn1_Click" />
            </ContentTemplate>
            <Triggers>
                <asp:AsyncPostBackTrigger ControlID="History1" />
            </Triggers>
        </asp:UpdatePanel>
    </div>
    </form>
</body>
</html>

5/ The code below adds the textbox value to the history every time we post back using the btn1 click:

protected void btn1_Click(object sender, EventArgs e)
{
    History1.AddHistoryPoint("txtState", txt1.Text);
}

6/ And finally the Navigate event of the History control:

protected void History1_Navigate(object sender, Microsoft.Web.Preview.UI.Controls.HistoryEventArgs args)
{
    string strState = string.Empty;

    if (args.State.ContainsKey("txtState"))
    {
        strState = (string)args.State["txtState"];
    }

    txt1.Text = strState;
}

Now all set to go :)

Reference: http://www.dotnetglobe.com/2008/08/using-asp.html
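
As a side note beyond the original sample: from ASP.NET 3.5 SP1 onwards, history support is built into ScriptManager itself (EnableHistory, AddHistoryPoint and the Navigate event), so on newer stacks the same pattern can be written without Microsoft.Web.Preview. A rough sketch, assuming the markup sets EnableHistory="true" and OnNavigate="ScriptManager1_Navigate" on the ScriptManager:

// Sketch for ASP.NET 3.5 SP1+, where ScriptManager replaces the
// Microsoft.Web.Preview History control.
protected void btn1_Click(object sender, EventArgs e)
{
    // Record the textbox value as a history point on each async postback.
    ScriptManager1.AddHistoryPoint("txtState", txt1.Text);
}

protected void ScriptManager1_Navigate(object sender, HistoryEventArgs e)
{
    // Restore the recorded state when the user navigates back or forward.
    txt1.Text = e.State["txtState"] ?? string.Empty;
}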

    Read the article

< Previous Page | 520 521 522 523 524 525 526 527 528 529 530 531  | Next Page >