Search Results

Search found 915 results on 37 pages for 'restrictions'.


  • Oracle Database 12c new feature: Information Lifecycle Management (ILM) Storage Enhancements

    - by Liu Maclean
    Oracle Database 12c introduces a set of Storage Enhancements for Information Lifecycle Management (ILM), the centerpiece of which is Automatic Data Placement (ADP, implemented through Automatic Data Optimization). 12c also adds Online Move Datafile, which lets a datafile be relocated while it stays online and its data remains fully accessible.

    In the initial release (12.1.0.1), Automatic Data Optimization (ADO) and Heat Map carry the following restrictions:

    - Multitenant container databases (CDBs) do not support Automatic Data Optimization or Heat Map.
    - Row-level policies for ADO are not supported for Temporal Validity. Partition-level ADO and compression are supported if partitioned on the end-time columns.
    - Row-level policies for ADO are not supported for in-database archiving. Partition-level ADO and compression are supported if partitioned on the ORA_ARCHIVE_STATE column.
    - Custom policies (user-defined functions) for ADO are not supported if the policies default at the tablespace level.
    - ADO does not perform checks for storage space in a target tablespace when using storage tiering.
    - ADO is not supported on tables with object types or materialized views.
    - ADO concurrency (the number of simultaneous policy jobs for ADO) depends on the concurrency of the Oracle scheduler.
    - If a policy job for ADO fails more than two times, then the job is marked disabled and the job must be manually enabled later.
    - Policies for ADO are only run in the Oracle Scheduler maintenance windows. Outside of the maintenance windows all policies are stopped. The only exceptions are those jobs for rebuilding indexes in ADO offline mode.
    - ADO has restrictions related to moving tables and table partitions.

    ADO policies can be declared at the row or segment level and are attached to a table with CREATE TABLE or ALTER TABLE. Once a policy is in place, the database evaluates it in the background and compresses or moves qualifying segments or rows across storage tiers; ILM policies defined on a table are inherited by its partitions. For example, to add a segment-level compression policy when creating a table:

        CREATE TABLE sales_ado
          (PROD_ID       NUMBER NOT NULL,
           CUST_ID       NUMBER NOT NULL,
           TIME_ID       DATE NOT NULL,
           CHANNEL_ID    NUMBER NOT NULL,
           PROMO_ID      NUMBER NOT NULL,
           QUANTITY_SOLD NUMBER(10,2) NOT NULL,
           AMOUNT_SOLD   NUMBER(10,2) NOT NULL)
        ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
        AFTER 6 MONTHS OF NO ACCESS;

        SQL> SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
          2  FROM USER_ILMPOLICIES;

        POLICY_NAME          POLICY_TYPE                ENABLED
        -------------------- -------------------------- --------------
        P41                  DATA MOVEMENT              YES

    The same kind of policy can be added to an existing partition:

        ALTER TABLE sales MODIFY PARTITION sales_1995
        ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
        AFTER 6 MONTHS OF NO ACCESS;

        SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
        FROM USER_ILMPOLICIES;

        POLICY_NAME              POLICY_TYPE   ENABLE
        ------------------------ ------------- ------
        P1                       DATA MOVEMENT YES
        P2                       DATA MOVEMENT YES

        /* You can disable an ADO policy with the following */
        ALTER TABLE sales_ado ILM DISABLE POLICY P1;
        /* You can delete an ADO policy with the following */
        ALTER TABLE sales_ado ILM DELETE POLICY P1;
        /* You can disable all ADO policies with the following */
        ALTER TABLE sales_ado ILM DISABLE_ALL;
        /* You can delete all ADO policies with the following */
        ALTER TABLE sales_ado ILM DELETE_ALL;
        /* You can disable an ADO policy in a partition with the following */
        ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DISABLE POLICY P2;
        /* You can delete an ADO policy in a partition with the following */
        ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DELETE POLICY P2;

    Before ILM/ADP can act on usage data, that data first has to be collected.
    Activity tracking operates at two levels: SEGMENT-LEVEL tracking records access statistics for an entire segment, while ROW-LEVEL tracking records modification times for individual rows. Some examples:

    1. Enable segment-level activity tracking:

        ALTER TABLE interval_sales ILM ENABLE ACTIVITY TRACKING SEGMENT ACCESS;

       This enables segment-level activity tracking on the INTERVAL_SALES table.

    2. Track creation time and write time:

        ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (CREATE TIME, WRITE TIME);

    3. Track read time:

        ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (READ TIME);

    In 12.1.0.1 the HEAT_MAP initialization parameter controls this tracking and can be set at either the system or the session level. To enable Heat Map:

        ALTER SYSTEM SET HEAT_MAP = ON;

    With Heat Map enabled, the database tracks access to segments and rows, except for objects in the SYSTEM and SYSAUX tablespaces, which are excluded from tracking. To disable tracking again:

        ALTER SYSTEM SET HEAT_MAP = OFF;

    The HEAT_MAP parameter also governs Automatic Data Optimization (ADO): to use ADO, Heat Map must be enabled. Real-time tracking data can be queried from V$HEAT_MAP_SEGMENT:

        SQL> select * from V$heat_map_segment;

        no rows selected

        SQL> alter session set heat_map=on;

        Session altered.

        SQL> select * from scott.emp;

        EMPNO ENAME      JOB        MGR  HIREDATE   SAL  COMM DEPTNO
        ----- ---------- ---------- ---- --------- ---- ----- ------
         7369 SMITH      CLERK      7902 17-DEC-80  800          20
         7499 ALLEN      SALESMAN   7698 20-FEB-81 1600   300    30
         7521 WARD       SALESMAN   7698 22-FEB-81 1250   500    30
         7566 JONES      MANAGER    7839 02-APR-81 2975          20
         7654 MARTIN     SALESMAN   7698 28-SEP-81 1250  1400    30
         7698 BLAKE      MANAGER    7839 01-MAY-81 2850          30
         7782 CLARK      MANAGER    7839 09-JUN-81 2450          10
         7788 SCOTT      ANALYST    7566 19-APR-87 3000          20
         7839 KING       PRESIDENT       17-NOV-81 5000          10
         7844 TURNER     SALESMAN   7698 08-SEP-81 1500     0    30
         7876 ADAMS      CLERK      7788 23-MAY-87 1100          20
         7900 JAMES      CLERK      7698 03-DEC-81  950          30
         7902 FORD       ANALYST    7566 03-DEC-81 3000          20
         7934 MILLER     CLERK      7782 23-JAN-82 1300          10

        14 rows selected.

        SQL> select * from v$heat_map_segment;

        OBJECT_NAME  SUBOBJECT_NAME  OBJ#   DATAOBJ#  TRACK_TIM  SEG SEG FUL LOO CON_ID
        ------------ --------------- ------ --------- ---------- --- --- --- --- ------
        EMP                          92997  92997     23-JUL-13  NO  NO  YES NO  0

    V$HEAT_MAP_SEGMENT is built on the internal view X$HEATMAPSEGMENT and displays real-time segment access information:

        Column          Datatype       Description
        --------------- -------------- ------------------------------------------------------------
        OBJECT_NAME     VARCHAR2(128)  Name of the object
        SUBOBJECT_NAME  VARCHAR2(128)  Name of the subobject
        OBJ#            NUMBER         Object number
        DATAOBJ#        NUMBER         Data object number
        TRACK_TIME      DATE           Timestamp of current activity tracking
        SEGMENT_WRITE   VARCHAR2(3)    Indicates whether the segment has write access (YES or NO)
        SEGMENT_READ    VARCHAR2(3)    Indicates whether the segment has read access (YES or NO)
        FULL_SCAN       VARCHAR2(3)    Indicates whether the segment has full table scan (YES or NO)
        LOOKUP_SCAN     VARCHAR2(3)    Indicates whether the segment has lookup scan (YES or NO)
        CON_ID          NUMBER         The ID of the container to which the data pertains. Possible
                                       values: 0 (rows that pertain to the entire CDB, or rows in
                                       non-CDBs); 1 (rows that pertain to only the root); n (the
                                       applicable container ID for the rows containing data). The
                                       Heat Map feature is not supported in CDBs in Oracle Database
                                       12c, so the value in this column can be ignored.

    Heat Map data is also persisted to disk: the history can be queried through DBA_HEAT_MAP_SEGMENT, whose underlying dictionary table is HEAT_MAP_STAT$.

    A worked Automatic Data Optimization example follows. Use case 1:

        SQL> alter system set heat_map=on;

        System altered.

    Recreate the SCOTT demo schema if necessary (http://www.askmaclean.com/archives/scott-schema-script.html), then:

        SQL> grant all on dbms_lock to scott;

        Grant succeeded.
        SQL> grant dba to scott;

        Grant succeeded.

        @ilm_setup_basic C:\APP\XIANGBLI\ORADATA\MACLEAN\ilm.dbf
        @tktgilm_demo_env_setup

        SQL> connect scott/tiger ;
        Connected.

        SQL> select count(*) from scott.employee;

          COUNT(*)
        ----------
              3072

        1 row selected.

        SQL> set serveroutput on
        SQL> exec print_compression_stats('SCOTT','EMPLOYEE');
        Compression Stats
        ------------------
        Uncmpressed          : 3072
        Adv/basic compressed : 0
        Others               : 0

        PL/SQL procedure successfully completed.

    The table holds 3072 rows, none of them compressed yet. Next, add a row-level ADO policy that compresses rows after three days without modification:

        alter table employee ilm add policy
        row store compress advanced row
        after 3 days of no modification
        /

        SQL> set serveroutput on
        SQL> execute list_ilm_policies;
        --------------------------------------------------
        Policies defined for SCOTT
        --------------------------------------------------
        Object Name------ : EMPLOYEE
        Subobject Name--- :
        Object Type------ : TABLE
        Inherited from--- : POLICY NOT INHERITED
        Policy Name------ : P1
        Action Type------ : COMPRESSION
        Scope------------ : ROW
        Compression level : ADVANCED
        Tier Tablespace-- :
        Condition type--- : LAST MODIFICATION TIME
        Condition days--- : 3
        Enabled---------- : YES
        --------------------------------------------------

        PL/SQL procedure successfully completed.

        SQL> select sysdate from dual;

        SYSDATE
        --------------
        29-JUL-13

        SQL> execute set_back_chktime(get_policy_name('EMPLOYEE',null,'COMPRESSION','ROW','ADVANCED',3,null,null),'EMPLOYEE',null,6);
        Object check time reset ...
        --------------------------------------
        Object Name    : EMPLOYEE
        Object Number  : 93123
        D.Object Numbr : 93123
        Policy Number  : 1
        Object chktime : 23-JUL-13 08.13.42.000000 AM
        Distnt chktime : 0
        --------------------------------------

        PL/SQL procedure successfully completed.

    The set_back_chktime helper winds the policy's check time back six days, a "time travel" trick that lets the demo satisfy the three-days-without-modification condition without actually waiting. Then flush the caches and open a maintenance window so the policy jobs can run:

        alter system flush buffer_cache;
        alter system flush buffer_cache;
        alter system flush shared_pool;
        alter system flush shared_pool;

        SQL> execute set_window('MONDAY_WINDOW','OPEN');
        Set Maint. Window OPEN
        -----------------------------
        Window Name : MONDAY_WINDOW
        Enabled?    : TRUE
        Active?     : TRUE
        -----------------------------

        PL/SQL procedure successfully completed.

        SQL> exec dbms_lock.sleep(60) ;

        PL/SQL procedure successfully completed.

        SQL> exec print_compression_stats('SCOTT', 'EMPLOYEE');
        Compression Stats
        ------------------
        Uncmpressed          : 338
        Adv/basic compressed : 2734
        Others               : 0

        PL/SQL procedure successfully completed.

    Once the maintenance window opens, the ADO job kicks in and 2734 rows end up advanced-compressed:

        SQL> col object_name for a20
        SQL> select object_id,object_name from dba_objects where object_name='EMPLOYEE';

         OBJECT_ID OBJECT_NAME
        ---------- --------------------
             93123 EMPLOYEE

        SQL> execute list_ilm_policy_executions ;
        --------------------------------------------------
        Policies execution details for SCOTT
        --------------------------------------------------
        Policy Name------ : P22
        Job Name--------- : ILMJOB48
        Start time------- : 29-JUL-13 08.37.45.061000 AM
        End time--------- : 29-JUL-13 08.37.48.629000 AM
        -----------------
        Object Name------ : EMPLOYEE
        Sub_obj Name----- :
        Obj Type--------- : TABLE
        -----------------
        Exec-state------- : SELECTED FOR EXECUTION
        Job state-------- : COMPLETED SUCCESSFULLY
        Exec comments---- :
        Results comments- : ---
        --------------------------------------------------

        PL/SQL procedure successfully completed.

    ILMJOB48 is the job that executed the policy; in 12.1.0.1 it runs under the J00x job slaves, while the MMON slave processes (M00x) evaluate ILM work in the background at roughly 15-minute intervals:

        select sample_time,program,module,action
        from v$active_session_history
        where action ='KDILM background EXEcution'
        order by sample_time;

        29-JUL-13 08.16.38.369000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
        29-JUL-13 08.17.38.388000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
        29-JUL-13 08.17.39.390000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
        29-JUL-13 08.23.38.681000000 AM  ORACLE.EXE (M002)  MMON_SLAVE  KDILM background EXEcution
        29-JUL-13 08.32.38.968000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
        29-JUL-13 08.33.39.993000000 AM  ORACLE.EXE (M003)  MMON_SLAVE  KDILM background EXEcution
        29-JUL-13 08.33.40.993000000 AM  ORACLE.EXE (M003)  MMON_SLAVE  KDILM background EXEcution
        29-JUL-13 08.36.40.066000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
        29-JUL-13 08.37.42.258000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
        29-JUL-13 08.37.43.258000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
        29-JUL-13 08.37.44.258000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
        29-JUL-13 08.38.42.386000000 AM  ORACLE.EXE (M001)  MMON_SLAVE  KDILM background EXEcution

        select distinct action from v$active_session_history where action like 'KDILM%'

        KDILM background CLeaNup
        KDILM background EXEcution

        SQL> execute set_window('MONDAY_WINDOW','CLOSE');
        Set Maint. Window CLOSE
        -----------------------------
        Window Name : MONDAY_WINDOW
        Enabled?    : TRUE
        Active?     : FALSE
        -----------------------------

        PL/SQL procedure successfully completed.

        SQL> drop table employee purge ;

        Table dropped.

    Finally, clean up the demo environment:

        spool ilm_usecase_1_cleanup.lst
        @ilm_demo_cleanup ;
        spool off

    Read the article

  • Series of abstract classes and NHibernate

    - by Chris Cowdery-Corvan
    Hello, and first off thanks for your time to look at this. For a research project I'm working on, I have a somewhat complex design (which I've been given) to persist to a database via NHibernate. Here's an example of the class hierarchy: TransitStrategy, TransportationCompany and TransportationLocation are all abstract classes. The XML configuration I have is presently:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="Vacationizer" namespace="Vacationizer.Domain.Transit">
          <class name="TransitStrategy">
            <id name="TransitStrategyId">
              <generator class="guid" />
            </id>
            <property name="Restrictions" />
            <joined-subclass name="Flight" table="Flight_TransitStrategy">
              <key column="TransitStrategyId" />
              <property name="DepartingAirport" />
              <property name="ArrivingAirport" />
              <property name="Airline" />
              <property name="FlightNumber" />
              <property name="FlightArrivalTime" />
              <property name="FlightDepartureTime" />
            </joined-subclass>
            <joined-subclass name="RentalCar" table="RentalCar_TransitStrategy">
              <key column="TransitStrategyId" />
              <property name="RentalCarBranch" />
              <property name="CarMake" />
              <property name="CarModel" />
              <property name="CarYear" />
              <property name="CarColor" />
              <property name="RentalBegins" />
              <property name="RentalEnds" />
            </joined-subclass>
          </class>
        </hibernate-mapping>

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="Vacationizer" namespace="Vacationizer.Domain.Transit">
          <class name="TransportationCompany">
            <id name="TransportationCompanyId">
              <generator class="guid" />
            </id>
            <property name="Name" />
            <property name="Reviews" />
            <property name="Website" />
            <property name="Photo" />
            <joined-subclass name="Airline" table="Airline_TransportationCompany">
              <key column="TransportationLocationId" />
            </joined-subclass>
            <joined-subclass name="RentalCarAgency" table="RentalCarAgency_TransportationCompany">
              <key column="TransportationLocationId" />
            </joined-subclass>
          </class>
        </hibernate-mapping>

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="Vacationizer" namespace="Vacationizer.Domain.Transit">
          <class name="TransportationLocation">
            <id name="TransportationLocationId">
              <generator class="guid" />
            </id>
            <property name="Name" />
            <property name="Image" />
            <property name="Geolocation" />
            <property name="Reviews" />
            <!-- <property name="HoursOpen" />-->
            <property name="PhoneNumber" />
            <property name="FaxNumber" />
            <joined-subclass name="Airport" table="Airport_TransportationLocation">
              <key column="TransportationLocationId" />
              <property name="AirportCode" />
              <property name="Website" />
            </joined-subclass>
            <joined-subclass name="RentalCarBranch" table="RentalCarBranch_TransportationLocation">
              <key column="TransitStrategyId" />
              <property name="Agency" />
            </joined-subclass>
          </class>
        </hibernate-mapping>

    However, whenever I try to use this schema I get this error/stack trace:

        ------ Test started: Assembly: Vacationizer.Tests.dll ------

        TestCase 'M:Vacationizer.Tests.VacationRepository_Fixture.TestFixtureSetUp' failed:
        Could not compile the mapping document: Vacationizer.Mappings.TransitStrategy.hbm.xml
        NHibernate.MappingException: Could not compile the mapping document:
          Vacationizer.Mappings.TransitStrategy.hbm.xml
        ---> NHibernate.MappingException: Problem trying to set property type by reflection
        ---> NHibernate.MappingException: class Vacationizer.Domain.Transit.RentalCar, Vacationizer,
          Version=1.0.0.0, Culture=neutral, PublicKeyToken=null not found while looking for property:
          RentalCarBranch
        ---> NHibernate.PropertyNotFoundException: Could not find a getter for property
          'RentalCarBranch' in class 'Vacationizer.Domain.Transit.RentalCar'
           at NHibernate.Properties.BasicPropertyAccessor.GetGetter(Type type, String propertyName)
           at NHibernate.Util.ReflectHelper.ReflectedPropertyClass(String className, String name, String accessorName)
           --- End of inner exception stack trace ---
           at NHibernate.Util.ReflectHelper.ReflectedPropertyClass(String className, String name, String accessorName)
           at NHibernate.Mapping.SimpleValue.SetTypeUsingReflection(String className, String propertyName, String accesorName)
           --- End of inner exception stack trace ---
           at NHibernate.Mapping.SimpleValue.SetTypeUsingReflection(String className, String propertyName, String accesorName)
           at NHibernate.Cfg.XmlHbmBinding.ClassBinder.CreateProperty(IValue value, String propertyName, String className, XmlNode subnode, IDictionary`2 inheritedMetas)
           at NHibernate.Cfg.XmlHbmBinding.ClassBinder.PropertiesFromXML(XmlNode node, PersistentClass model, IDictionary`2 inheritedMetas, UniqueKey uniqueKey, Boolean mutable, Boolean nullable, Boolean naturalId)
           at NHibernate.Cfg.XmlHbmBinding.JoinedSubclassBinder.HandleJoinedSubclass(PersistentClass model, XmlNode subnode, IDictionary`2 inheritedMetas)
           at NHibernate.Cfg.XmlHbmBinding.ClassBinder.PropertiesFromXML(XmlNode node, PersistentClass model, IDictionary`2 inheritedMetas, UniqueKey uniqueKey, Boolean mutable, Boolean nullable, Boolean naturalId)
           at NHibernate.Cfg.XmlHbmBinding.RootClassBinder.Bind(XmlNode node, HbmClass classSchema, IDictionary`2 inheritedMetas)
           at NHibernate.Cfg.XmlHbmBinding.MappingRootBinder.AddRootClasses(XmlNode parentNode, IDictionary`2 inheritedMetas)
           at NHibernate.Cfg.XmlHbmBinding.MappingRootBinder.Bind(XmlNode node)
           at NHibernate.Cfg.Configuration.AddValidatedDocument(NamedXmlDocument doc)
           --- End of inner exception stack trace ---
           at NHibernate.Cfg.Configuration.LogAndThrow(Exception exception)
           at NHibernate.Cfg.Configuration.AddValidatedDocument(NamedXmlDocument doc)
           at NHibernate.Cfg.Configuration.ProcessMappingsQueue()
           at NHibernate.Cfg.Configuration.AddDocumentThroughQueue(NamedXmlDocument document)
           at NHibernate.Cfg.Configuration.AddXmlReader(XmlReader hbmReader, String name)
           at NHibernate.Cfg.Configuration.AddInputStream(Stream xmlInputStream, String name)
           at NHibernate.Cfg.Configuration.AddResource(String path, Assembly assembly)
           at NHibernate.Cfg.Configuration.AddAssembly(Assembly assembly)
           at NHibernate.Cfg.Configuration.AddAssembly(String assemblyName)
           at NHibernate.Cfg.Configuration.DoConfigure(IHibernateConfiguration hc)
           at NHibernate.Cfg.Configuration.Configure()
           VacationRepository_Fixture.cs(24,0): at Vacationizer.Tests.VacationRepository_Fixture.TestFixtureSetUp()

        0 passed, 1 failed, 0 skipped, took 8.38 seconds (Ad hoc).

    Any ideas on how I can implement this differently? Thanks very much!
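    For what it's worth, the PropertyNotFoundException at the root of that trace means NHibernate looked for a public RentalCarBranch property on RentalCar via reflection and found none. A minimal sketch of the class shape the mapping above expects (the domain code isn't shown in the post, so the members here are assumptions):

        // Hypothetical domain class matching <joined-subclass name="RentalCar">.
        // NHibernate binds <property name="RentalCarBranch"/> by reflection, so the
        // class must expose a property with exactly that name and at least a getter;
        // members are virtual to satisfy NHibernate's default lazy-loading proxies.
        namespace Vacationizer.Domain.Transit
        {
            public class RentalCar : TransitStrategy
            {
                public virtual RentalCarBranch RentalCarBranch { get; set; }
                public virtual string CarMake { get; set; }
                public virtual string CarModel { get; set; }
                // ... remaining mapped properties ...
            }
        }

    Since RentalCarBranch is itself a mapped entity rather than a scalar, mapping it with <many-to-one name="RentalCarBranch" /> instead of <property> would also be the more conventional choice.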

    Read the article

  • How to send mail from ASP.NET with IIS6 SMTP in a dedicated server?

    - by Julio César
    Hi. I'm trying to configure a dedicated server that runs ASP.NET to send mail through the local IIS SMTP server, but mail is getting stuck in the Queue folder and doesn't get delivered. I'm using this code in an .aspx page to test:

        <%@ Page Language="C#" AutoEventWireup="true" %>
        <% new System.Net.Mail.SmtpClient("localhost").Send("[email protected]",
           "[email protected]", "testing...", "Hello, world.com"); %>

    Then, I added the following to the Web.config file:

        <system.net>
          <mailSettings>
            <smtp>
              <network host="localhost"/>
            </smtp>
          </mailSettings>
        </system.net>

    In the IIS Manager I've changed the following in the properties of the "Default SMTP Virtual Server":

        General:
          [X] Enable Logging
        Access / Authentication:
          [X] Windows Integrated Authentication
        Access / Relay Restrictions:
          (o) Only the list below, Granted 127.0.0.1
        Delivery / Advanced:
          Fully qualified domain name = thedomain.com

    Finally, I run the SMTPDiag.exe tool like this:

        C:\>smtpdiag.exe [email protected] [email protected]

        Searching for Exchange external DNS settings.
        Computer name is THEDOMAIN.
        Failed to connect to the domain controller. Error: 8007054b
        Checking SOA for gmail.com.
        Checking external DNS servers.
        Checking internal DNS servers.
        SOA serial number match: Passed.
        Checking local domain records.
        Checking MX records using TCP: thedomain.com.
        Checking MX records using UDP: thedomain.com.
        Both TCP and UDP queries succeeded. Local DNS test passed.
        Checking remote domain records.
        Checking MX records using TCP: gmail.com.
        Checking MX records using UDP: gmail.com.
        Both TCP and UDP queries succeeded. Remote DNS test passed.
        Checking MX servers listed for [email protected].
        Connecting to gmail-smtp-in.l.google.com [209.85.199.27] on port 25.
        Connecting to the server failed. Error: 10060
        Failed to submit mail to gmail-smtp-in.l.google.com.
        Connecting to gmail-smtp-in.l.google.com [209.85.199.114] on port 25.
        Connecting to the server failed. Error: 10060
        Failed to submit mail to gmail-smtp-in.l.google.com.
        Connecting to alt2.gmail-smtp-in.l.google.com [209.85.135.27] on port 25.
        Connecting to the server failed. Error: 10060
        Failed to submit mail to alt2.gmail-smtp-in.l.google.com.
        Connecting to alt2.gmail-smtp-in.l.google.com [209.85.135.114] on port 25.
        Connecting to the server failed. Error: 10060
        Failed to submit mail to alt2.gmail-smtp-in.l.google.com.
        Connecting to alt1.gmail-smtp-in.l.google.com [209.85.133.27] on port 25.
        Connecting to the server failed. Error: 10060
        Failed to submit mail to alt1.gmail-smtp-in.l.google.com.
        Connecting to alt2.gmail-smtp-in.l.google.com [74.125.79.27] on port 25.
        Connecting to the server failed. Error: 10060
        Failed to submit mail to alt2.gmail-smtp-in.l.google.com.
        Connecting to alt2.gmail-smtp-in.l.google.com [74.125.79.114] on port 25.
        Connecting to the server failed. Error: 10060
        Failed to submit mail to alt2.gmail-smtp-in.l.google.com.
        Connecting to alt1.gmail-smtp-in.l.google.com [209.85.133.114] on port 25.
        Connecting to the server failed. Error: 10060
        Failed to submit mail to alt1.gmail-smtp-in.l.google.com.
        Connecting to gsmtp183.google.com [64.233.183.27] on port 25.
        Connecting to the server failed. Error: 10060
        Failed to submit mail to gsmtp183.google.com.
        Connecting to gsmtp147.google.com [209.85.147.27] on port 25.
        Connecting to the server failed. Error: 10051
        Failed to submit mail to gsmtp147.google.com.

    I'm using ASP.NET 2.0, Windows 2003 Server and the IIS that comes with it. Can you tell me what else to change to fix the problem?
    Thanks @mattlant. This is a dedicated server; that's why I'm installing the SMTP service manually.

    EDIT: "I use Exchange so it's a little different, but it's called a smart host in Exchange; in the plain SMTP service config I think it's called something else, can't remember the exact setting name." Thank you for pointing me at the Smart host field. Mail is getting delivered now. In the Default SMTP Virtual Server properties, on the Delivery tab, click Advanced and fill in the "Smart host" field with the address that your provider gives you. In my case (GoDaddy) it was k2smtpout.secureserver.net. More info here: http://help.godaddy.com/article/1283
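    If you would rather skip the local IIS SMTP queue entirely, the same one-line test from the question can relay straight through the provider's smart host instead (host name taken from the edit above; whether the relay requires authentication depends on the provider, so this is only a sketch):

        <%@ Page Language="C#" AutoEventWireup="true" %>
        <% // Hypothetical variant of the test page: send via the smart host
           // directly rather than queuing through the local IIS SMTP service.
           new System.Net.Mail.SmtpClient("k2smtpout.secureserver.net")
               .Send("[email protected]", "[email protected]", "testing...", "Hello, world."); %>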

    Read the article

  • Windows 2008 RenderFarm Service: CreateProcessAsUser "Session 0 Isolation" and OpenGL

    - by holtavolt
    Hello, I have a legacy Windows server service and (spawned) application that works fine in XP-64 and W2K3, but fails on W2K8. I believe it is because of the new "Session 0 isolation" feature. (Note: As a StackOverflow newbie I'm being limited to one link in this post, so you'll need to scroll to the bottom to look up the links for the starred items.)

    Consequently, I'm looking for code samples/security settings mojo that let you create a new process from a Windows service for Windows 2008 Server such that I can restore (and possibly surpass) the previous behavior. I need a solution that:

    1. Creates the new process in a non-zero session to get around session-0 isolation restrictions (no access to graphics hardware from session 0). The official MS line on this is: "Because Session 0 is no longer a user session, services that are running in Session 0 do not have access to the video driver. This means that any attempt that a service makes to render graphics fails. Querying the display resolution and color depth in Session 0 reports the correct results for the system up to a maximum of 1920x1200 at 32 bits per pixel."

    2. The new process gets a window station/desktop (e.g. winsta0/default) that can be used to create Windows DCs. I've found a solution (that launches OK in an interactive session) for this here: *(Starting an Interactive Client Process in C++ - 2)

    3. The Windows DC, when used as the basis for an *(OpenGL DescribePixelFormat enumeration - 3), is able to find and use the hardware-accelerated format (on a system appropriately equipped with OpenGL hardware).

    Note that our current solution works OK on XP-64 and W2K3, except if a terminal services session is running (VNC works fine). A solution that also allowed the process to work (i.e. run with OpenGL hardware acceleration even when a terminal services session is open) would be fantastic, although not required.

    I'm stuck at item #1 currently, and although there are some similar postings that discuss this (like *(this - 4) and *(this - 5)), they are not suitable solutions, as there is no guarantee of a user session logged in already to "take" a session id from, nor am I running from a LocalSystem account (I'm running from a domain account for the service, whose privileges I can adjust within reason, although I'd prefer not to have to escalate them to include SeTcbPrivilege).

    For instance, here's a stub that I think should work, but it always returns error 1314 on the SetTokenInformation call (even though AdjustTokenPrivileges returned no errors). I've used some alternate strategies involving "LogonUser" as well (instead of opening the existing process token), but I can't seem to swap out the session id. I'm also dubious about using WTSActiveConsoleSessionId in all cases (for instance, if no interactive user is logged in), although a quick test of the service running with no sessions logged in seemed to return a reasonable session value (1). I've removed error handling for ease of reading (still a bit messy; apologies):

        // Also tried using LogonUser(..) here
        OpenProcessToken(GetCurrentProcess(),
                         TOKEN_QUERY | TOKEN_ADJUST_PRIVILEGES | TOKEN_ADJUST_SESSIONID |
                         TOKEN_ADJUST_DEFAULT | TOKEN_ASSIGN_PRIMARY | TOKEN_DUPLICATE,
                         &hToken)

        GetTokenInformation( hToken, TokenSessionId, &logonSessionId, sizeof(DWORD), &dwTokenLength )

        DWORD consoleSessionId = WTSGetActiveConsoleSessionId();

        /* Can't use this - requires very elevated privileges (LOCAL only, SeTcbPrivileges as well)
        if( !WTSQueryUserToken(consoleSessionId, &hToken))
            ...
        */

        DuplicateTokenEx(hToken,
                         (TOKEN_QUERY | TOKEN_ADJUST_PRIVILEGES | TOKEN_ADJUST_SESSIONID |
                          TOKEN_ADJUST_DEFAULT | TOKEN_ASSIGN_PRIMARY | TOKEN_DUPLICATE),
                         NULL, SecurityIdentification, TokenPrimary, &hDupToken))

        // Look up the LUID for the TCB Name privilege.
        LookupPrivilegeValue(NULL, SE_TCB_NAME, &tp.Privileges[0].Luid))

        // Enable the TCB Name privilege in the token.
        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        if (!AdjustTokenPrivileges(hDupToken, FALSE, &tp, sizeof(TOKEN_PRIVILEGES), NULL, 0))
        {
            DisplayError("AdjustTokenPrivileges");
            ...
        }
        if (GetLastError() == ERROR_NOT_ALL_ASSIGNED)
        {
            DEBUG( "Token does not have the necessary privilege.\n");
        }
        else
        {
            DEBUG( "No error reported from AdjustTokenPrivileges!\n");
        }
        // Never errors here
        DEBUG(LM_INFO, "Attempting setting of sessionId to: %d\n", consoleSessionId );
        if (!SetTokenInformation(hDupToken, TokenSessionId, &consoleSessionId, sizeof(DWORD)))
            // *** ALWAYS FAILS WITH 1314 HERE ***

    All the debug output looks fine up until the SetTokenInformation call: I see session 0 is my current process session, and in my case it's trying to set session 1 (the result of WTSGetActiveConsoleSessionId). (Note that I'm logged into the W2K8 box via VNC, not RDC.)

    So, the questions:

    1. Is this approach valid, or are all service-initiated processes restricted to session 0 intentionally?
    2. Is there a better approach (short of "launch on logon" and auto-logon for the servers)?
    3. Is there something wrong with this code, or a different way to create a process token where I can swap out the session id to indicate I want to spawn the process in a new session? I did try using LogonUser instead of OpenProcessToken, but that didn't work either. (I don't care if all spawned processes share the same non-zero session or not at this point.)

    Any help much appreciated - thanks!

    (You need to replace the 'zttp' with 'http' - StackOverflow restriction on one link in my newbie post)

    2: http://msdn.microsoft.com/en-us/library/aa379608(VS.85).aspx
    3: http://www.opengl.org/resources/faq/technical/mswindows.htm
    4: http://stackoverflow.com/questions/2237696/creating-a-process-in-a-non-zero-session-from-a-service-in-windows-2008-server
    5: http://stackoverflow.com/questions/1602996/how-can-i-lauch-a-process-which-has-a-ui-from-windows-service

    Read the article

  • Is there an equivalent to Java's ClassFileTransformer in .NET? (a way to replace a class)

    - by Alix
    I've been searching for this for quite a while with no luck so far. Is there an equivalent to Java's ClassFileTransformer in .NET? Basically, I want to create a class CustomClassFileTransformer (which in Java would implement the interface ClassFileTransformer) that gets called whenever a class is loaded, and is allowed to tweak it and replace it with the tweaked version. I know there are frameworks that do similar things, but I was looking for something more straightforward, like implementing my own ClassFileTransformer. Is it possible?

    EDIT #1. More details about why I need this: Basically, I have a C# application and I need to monitor the instructions it wants to run in order to detect read or write operations to fields (operations Ldfld and Stfld) and insert some instructions before the read/write takes place. I know how to do this (except for the part where I need to be invoked to replace the class): for every method whose code I want to monitor, I must:

    1. Get the method's MethodBody using MethodBase.GetMethodBody().
    2. Transform it to a byte array with MethodBody.GetILAsByteArray(). The byte[] it returns contains the bytecode.
    3. Analyse the bytecode as explained here, possibly inserting new instructions or deleting/modifying existing ones by changing the contents of the array.
    4. Create a new method and use the new bytecode to create its body, with MethodBuilder.CreateMethodBody(byte[] il, int count), where il is the array with the bytecode.
    5. Put all these tweaked methods in a new class and use the new class to replace the one that was originally going to be loaded.

    An alternative to replacing classes would be somehow getting notified whenever a method is invoked. Then I'd replace the call to that method with a call to my own tweaked method, which I would tweak only the first time it is invoked and then put in a dictionary for future uses, to reduce overhead (for future calls I'll just look up the method and invoke it; I won't need to analyse the bytecode again). I'm currently investigating ways to do this and LinFu looks pretty interesting, but if there was something like a ClassFileTransformer it would be much simpler: I just rewrite the class, replace it, and let the code run without monitoring anything. An additional note: the classes may be sealed. I want to be able to replace any kind of class; I cannot impose restrictions on their attributes.

    EDIT #2. Why I need to do this at runtime: I need to monitor everything that is going on so that I can detect every access to data. This applies to the code of library classes as well. However, I cannot know in advance which classes are going to be used, and even if I knew every possible class that may get loaded it would be a huge performance hit to tweak all of them instead of waiting to see whether they actually get invoked or not.

    POSSIBLE (BUT PRETTY HARDCORE) SOLUTION. In case anyone is interested (and I see the question has been faved, so I guess someone is), this is what I'm looking at right now. Basically I'd have to implement the profiling API and register for the events that I'm interested in, in my case whenever a JIT compilation starts. An extract of the blog post:

    "In your ICorProfilerCallback2::ModuleLoadFinished callback, you call ICorProfilerInfo2::GetModuleMetadata to get a pointer to a metadata interface on that module. QI for the metadata interface you want. Search MSDN for 'IMetaDataImport', and grope through the table of contents to find topics on the metadata interfaces.
    Once you're in metadata-land, you have access to all the types in the module, including their fields and function prototypes. You may need to parse metadata signatures and this signature parser may be of use to you. In your ICorProfilerCallback2::JITCompilationStarted callback, you may use ICorProfilerInfo2::GetILFunctionBody to inspect the original IL, and ICorProfilerInfo2::GetILFunctionBodyAllocator and then ICorProfilerInfo2::SetILFunctionBody to replace that IL with your own."

    The great news: I get notified when a JIT compilation starts and I can replace the bytecode right there, without having to worry about replacing the class, etc. The not-so-great news: you cannot invoke managed code from the API's callback methods, which makes sense but means I'm on my own parsing the IL code, etc., as opposed to being able to use Cecil, which would've been a breeze. I don't think there's a simpler way to do this without using AOP frameworks (such as PostSharp). If anyone has any other idea please let me know. I'm not marking the question as answered yet.
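    To make steps 1 and 2 of the reflection-based approach concrete, here is a minimal sketch (the inspected method is arbitrary; any MethodInfo works) of pulling a method's raw IL out as a byte array:

        // Minimal sketch: read a method's IL bytes via plain reflection (steps 1-2
        // above). Rewriting the IL and loading a replacement type (steps 3-5) would
        // additionally need System.Reflection.Emit or the profiler API described above.
        using System;
        using System.Reflection;

        class IlDumper
        {
            static void Main()
            {
                MethodInfo method = typeof(string).GetMethod(
                    "Concat", new[] { typeof(string), typeof(string) });

                MethodBody body = method.GetMethodBody();   // step 1
                byte[] il = body.GetILAsByteArray();        // step 2

                Console.WriteLine("{0}: {1} bytes of IL, first byte 0x{2:X2}",
                                  method.Name, il.Length, il[0]);
            }
        }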

    Read the article

  • Ldap invalid credentials not loading authentication failure url

    - by Murari
    I'm able to do custom LDAP authentication against external authorities, but when I test with a wrong password, the authentication-failure URL is not shown; instead my browser prints the exception details. Below are my security context XML and the exception:

        <http auto-config="false" access-decision-manager-ref="accessDecisionManager"
              access-denied-page="/accessDenied.jsp">
          <!-- Restrict access to ALL other pages -->
          <intercept-url pattern="/index.jsp" filters="none" />
          <!-- Don't set any role restrictions on login.jsp -->
          <intercept-url pattern="/**" access="IS_AUTHENTICATED_ANONYMOUSLY" />
          <intercept-url pattern="/service/**" access="PRIV_Report User, PRIV_305" />
          <logout logout-success-url="/index.jsp" />
          <form-login authentication-failure-url="/index.jsp?error=1" default-target-url="/home.jsp" />
          <anonymous/>
        </http>

        <b:bean id="accessDecisionManager" class="org.springframework.security.vote.AffirmativeBased">
          <b:property name="decisionVoters">
            <b:list>
              <b:ref bean="roleVoter" />
              <b:ref bean="authenticatedVoter" />
            </b:list>
          </b:property>
        </b:bean>

        <b:bean id="roleVoter" class="org.springframework.security.vote.RoleVoter">
          <b:property name="rolePrefix" value="PRIV_" />
        </b:bean>

        <b:bean id="authenticatedVoter" class="org.springframework.security.vote.AuthenticatedVoter">
        </b:bean>

        <b:bean id="contextSource"
                class="org.springframework.security.ldap.DefaultSpringSecurityContextSource">
          <b:constructor-arg value="ldap://mydomain:389" />
        </b:bean>

        <b:bean id="ldapTemplate" class="org.springframework.ldap.core.LdapTemplate">
          <b:constructor-arg ref="contextSource" />
        </b:bean>

        <b:bean id="ldapAuthenticationProvider"
                class="com.zo.sas.gwt.security.login.server.SASLdapAuthenticationProvider">
          <b:property name="authenticator" ref="ldapAuthenticator" />
          <custom-authentication-provider />
        </b:bean>

        <b:bean id="ldapAuthenticator" class="com.zo.sas.gwt.security.login.server.SASAuthenticator">
          <b:property name="contextSource" ref="contextSource" />
          <b:property name="userDnPatterns">
            <b:value>uid={0},OU=People</b:value>
          </b:property>
        </b:bean>

    And my exception logs:
        org.springframework.ldap.AuthenticationException: [LDAP: error code 49 - Invalid Credentials];
          nested exception is javax.naming.AuthenticationException: [LDAP: error code 49 - Invalid Credentials]
            org.springframework.ldap.support.LdapUtils.convertLdapException(LdapUtils.java:180)
            org.springframework.ldap.core.support.AbstractContextSource.createContext(AbstractContextSource.java:266)
            org.springframework.ldap.core.support.AbstractContextSource.getContext(AbstractContextSource.java:106)
            com.zo.sas.gwt.security.login.server.SASAuthenticator.authenticate(SASAuthenticator.java:55)
            com.zo.sas.gwt.security.login.server.SASLdapAuthenticationProvider.authenticate(SASLdapAuthenticationProvider.java:45)
            org.springframework.security.providers.ProviderManager.doAuthentication(ProviderManager.java:188)
            org.springframework.security.AbstractAuthenticationManager.authenticate(AbstractAuthenticationManager.java:46)
            org.springframework.security.ui.webapp.AuthenticationProcessingFilter.attemptAuthentication(AuthenticationProcessingFilter.java:82)
            org.springframework.security.ui.AbstractProcessingFilter.doFilterHttp(AbstractProcessingFilter.java:258)
            org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
            org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
            org.springframework.security.ui.logout.LogoutFilter.doFilterHttp(LogoutFilter.java:89)
            org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
            org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
            org.springframework.security.context.HttpSessionContextIntegrationFilter.doFilterHttp(HttpSessionContextIntegrationFilter.java:235)
            org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
            org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
            org.springframework.security.util.FilterChainProxy.doFilter(FilterChainProxy.java:175)
            org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:183)
            org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:138)

    This is my index.jsp:

        <html>
          <script type="text/javascript" language="javascript">
            var dictionary = {
              loginErr: "${SPRING_SECURITY_LAST_EXCEPTION.message}",
              error: "${param.error}"
            };
          </script>
          <head>
          </head>
          <body>
            <iframe src="javascript:''" id="__gwt_historyFrame" style="width:0;height:0;border:0"></iframe>
            <script type="text/javascript" language="javascript"
                    src="com.zo.sas.gwt.sasworkflow.home.Home.nocache.js"></script>
          </body>
        </html>

    Read the article

  • Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix

    - by ScottGu
    I'm excited to announce the release today of several products:

    - ASP.NET MVC 3
    - NuGet
    - IIS Express 7.5
    - SQL Server Compact Edition 4
    - Web Deploy and Web Farm Framework 2.0
    - Orchard 1.0
    - WebMatrix 1.0

    The above products are all free. They build upon the .NET 4 and VS 2010 release, and add a ton of additional value to ASP.NET (both Web Forms and MVC) and the Microsoft Web Server stack.

    ASP.NET MVC 3

    Today we are shipping the final release of ASP.NET MVC 3.  You can download and install ASP.NET MVC 3 here.  The ASP.NET MVC 3 source code (released under an OSI-compliant open source license) can also optionally be downloaded here. ASP.NET MVC 3 is a significant update that brings with it a bunch of great features.  Some of the improvements include:

    Razor

    ASP.NET MVC 3 ships with a new view-engine option called "Razor" (in addition to continuing to support/enhance the existing .aspx view engine).  Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, with Razor you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type.  You can learn more about Razor from some of the blog posts I've done about it over the last 6 months:

    - Introducing Razor
    - New @model keyword in Razor
    - Layouts with Razor
    - Server-Side Comments with Razor
    - Razor's @: and <text> syntax
    - Implicit and Explicit code nuggets with Razor
    - Layouts and Sections with Razor

    Today's release supports full code intellisense support for Razor (both VB and C#) with Visual Studio 2010 and the free Visual Web Developer 2010 Express.

    JavaScript Improvements

    ASP.NET MVC 3 enables richer JavaScript scenarios and takes advantage of emerging HTML5 capabilities. The AJAX and Validation helpers in ASP.NET MVC 3 now use an Unobtrusive JavaScript based approach.  Unobtrusive JavaScript avoids injecting inline JavaScript into HTML, and enables cleaner separation of behavior using the new HTML 5 "data-" attribute convention (which conveniently works on older browsers as well, including IE6). This keeps your HTML tight and clean, and makes it easier to optionally swap out or customize JS libraries.  ASP.NET MVC 3 now includes built-in support for posting JSON-based parameters from client-side JavaScript to action methods on the server.  This makes it easier to exchange data across the client and server, and build rich JavaScript front-ends.  We think this capability will be particularly useful going forward with scenarios involving client templates and data binding (including the jQuery plugins the ASP.NET team recently contributed to the jQuery project).  Previous releases of ASP.NET MVC included the core jQuery library.  ASP.NET MVC 3 also now ships the jQuery Validate plugin (which our validation helpers use for client-side validation scenarios).  We are also now shipping and including jQuery UI by default as well (which provides a rich set of client-side JavaScript UI widgets for you to use within projects).

    Improved Validation

    ASP.NET MVC 3 includes a bunch of validation enhancements that make it even easier to work with data. Client-side validation is now enabled by default with ASP.NET MVC 3 (using an unobtrusive JavaScript implementation).
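    As a quick illustration of what those defaults give you (the model and rules here are hypothetical, not from the release notes): annotating a view model with System.ComponentModel.DataAnnotations attributes is enough for the helpers to emit the matching unobtrusive client-side checks.

        // Hypothetical MVC 3 view model: with client validation on (the default),
        // Html.TextBoxFor + Html.ValidationMessageFor render "data-val-*" attributes
        // for these rules, and jQuery Validate enforces them in the browser before
        // the form ever posts back.
        using System.ComponentModel.DataAnnotations;

        public class RegistrationModel
        {
            [Required]
            [StringLength(20, MinimumLength = 3)]
            public string UserName { get; set; }

            [Required]
            [Range(18, 120, ErrorMessage = "Age must be between 18 and 120.")]
            public int Age { get; set; }
        }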
    Today's release also includes built-in support for Remote Validation, which enables you to annotate a model class with a validation attribute that causes ASP.NET MVC to perform a remote validation call to a server method when validating input on the client. The validation features introduced within .NET 4's System.ComponentModel.DataAnnotations namespace are now supported by ASP.NET MVC 3.  This includes support for the new IValidatableObject interface, which enables you to perform model-level validation and allows you to provide validation error messages specific to the state of the overall model, or between two properties within the model.  ASP.NET MVC 3 also supports the improvements made to the ValidationAttribute class in .NET 4.  ValidationAttribute now supports a new IsValid overload that provides more information about the current validation context, such as what object is being validated.  This enables richer scenarios where you can validate the current value based on another property of the model.  We've shipped a built-in [Compare] validation attribute with ASP.NET MVC 3 that uses this support and makes it easy out of the box to compare and validate two property values.

    You can use any data access API or technology with ASP.NET MVC.  This past year, though, we've worked closely with the .NET data team to ensure that the new EF Code First library works really well for ASP.NET MVC applications.  These two posts of mine cover the latest EF Code First preview and demonstrate how to use it with ASP.NET MVC 3 to enable easy editing of data (with end-to-end client+server validation support).  The final release of EF Code First will ship in the next few weeks. Today we are also publishing the first preview of a new MvcScaffolding project.  It enables you to easily scaffold ASP.NET MVC 3 Controllers and Views, and works great with EF Code-First (and is pluggable to support other data providers).  You can learn more about it, and install it via NuGet today, from Steve Sanderson's MvcScaffolding blog post.

    Output Caching

    Previous releases of ASP.NET MVC supported output caching content at a URL or action-method level. With ASP.NET MVC V3 we are also enabling support for partial page output caching, which allows you to easily output cache regions or fragments of a response as opposed to the entire thing.  This ends up being super useful in a lot of scenarios, and enables you to dramatically reduce the work your application does on the server.  The new partial page output caching support in ASP.NET MVC 3 enables you to easily re-use cached sub-regions/fragments of a page across multiple URLs on a site.  It supports the ability to cache the content either on the web-server, or optionally cache it within a distributed cache server like Windows Server AppFabric or memcached. I'll post some tutorials on my blog that show how to take advantage of ASP.NET MVC 3's new output caching support for partial page scenarios in the future.
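    In the meantime, here is a rough sketch (controller name and duration hypothetical) of what partial-page caching looks like: a child action marked [ChildActionOnly] and [OutputCache] has its rendered fragment cached independently of whichever pages embed it via Html.Action or Html.RenderAction.

        // Hypothetical MVC 3 child action: only the fragment this action renders
        // is cached (for 60 seconds here); pages that embed it stay fully dynamic.
        using System.Web.Mvc;

        public class NewsController : Controller
        {
            [ChildActionOnly]
            [OutputCache(Duration = 60)]
            public ActionResult LatestHeadlines()
            {
                // Fetch headlines from wherever they live and render a partial view.
                return PartialView("_LatestHeadlines");
            }
        }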
    Better Dependency Injection

    ASP.NET MVC 3 provides better support for applying Dependency Injection (DI) and integrating with Dependency Injection/IOC containers. With ASP.NET MVC 3 you no longer need to author custom ControllerFactory classes in order to enable DI with Controllers. You can instead just register a Dependency Injection framework with ASP.NET MVC 3 and it will resolve dependencies not only for Controllers, but also for Views, Action Filters, Model Binders, Value Providers, Validation Providers, and Model Metadata Providers that you use within your application. This makes it much easier to cleanly integrate dependency injection within your projects.

    Other Goodies

    ASP.NET MVC 3 includes dozens of other nice improvements that help to both reduce the amount of code you write and make the code you do write cleaner.  Here are just a few examples:

    - Improved New Project dialog that makes it easy to start new ASP.NET MVC 3 projects from templates.
    - Improved Add->View Scaffolding support that enables the generation of even cleaner view templates.
    - New ViewBag property that uses .NET 4's dynamic support to make it easy to pass late-bound data from Controllers to Views.
    - Global Filters support that allows specifying cross-cutting filter attributes (like [HandleError]) across all Controllers within an app.
    - New [AllowHtml] attribute that allows for more granular request validation when binding form-posted data to models.
    - Sessionless controller support that allows fine-grained control over whether SessionState is enabled on a Controller.
    - New ActionResult types like HttpNotFoundResult and RedirectPermanent for common HTTP scenarios.
    - New Html.Raw() helper to indicate that output should not be HTML encoded.
    - New Crypto helpers for salting and hashing passwords.
    - And much, much more…

    Learn More about ASP.NET MVC 3

    We will be posting lots of tutorials and samples on the http://asp.net/mvc site in the weeks ahead.  Below are two good ASP.NET MVC 3 tutorials available on the site today:

    - Build your First ASP.NET MVC 3 Application: VB and C#
    - Building the ASP.NET MVC 3 Music Store

    We'll post additional ASP.NET MVC 3 tutorials and videos on the http://asp.net/mvc site in the future. Visit it regularly to find new tutorials as they are published.

    How to Upgrade Existing Projects

    ASP.NET MVC 3 is compatible with ASP.NET MVC 2, which means it should be easy to update existing MVC projects to ASP.NET MVC 3.  The new features in ASP.NET MVC 3 build on top of the foundational work we've already done with the MVC 1 and MVC 2 releases, which means that the skills, knowledge, libraries, and books you've acquired are all directly applicable with the MVC 3 release.  MVC 3 adds new features and capabilities; it doesn't obsolete existing ones. You can upgrade existing ASP.NET MVC 2 projects by following the manual upgrade steps in the release notes.  Alternatively, you can use this automated ASP.NET MVC 3 upgrade tool to easily update your existing projects.

    Localized Builds

    Today's ASP.NET MVC 3 release is available in English.  We will be releasing localized versions of ASP.NET MVC 3 (in 9 languages) in a few days.  I'll blog pointers to the localized downloads once they are available.

    NuGet

    Today we are also shipping NuGet, a free, open source package manager that makes it easy for you to find, install, and use open source libraries in your projects. It works with all .NET project types (including ASP.NET Web Forms, ASP.NET MVC, WPF, WinForms, Silverlight, and Class Libraries).  You can download and install it here. NuGet enables developers who maintain open source projects (for example, .NET projects like Moq, NHibernate, Ninject, StructureMap, NUnit, Windsor, Raven, Elmah, etc.) to package up their libraries and register them with an online gallery/catalog that is searchable.
    The client-side NuGet tools, which include full Visual Studio integration, make it trivial for any .NET developer who wants to use one of these libraries to easily find and install it within the project they are working on. NuGet handles dependency management between libraries (for example: library1 depends on library2). It also makes it easy to update (and optionally remove) libraries from your projects later. It supports updating web.config files (if a package needs configuration settings). It also allows packages to add PowerShell scripts to a project (for example: scaffold commands). Importantly, NuGet is transparent and clean, and does not install anything at the system level. Instead it is focused on making it easy to manage libraries you use with your projects. Our goal with NuGet is to make it as simple as possible to integrate open source libraries within .NET projects.

    NuGet Gallery

    This week we also launched a beta version of the http://nuget.org web-site, which allows anyone to easily search and browse an online gallery of open source packages available via NuGet.  The site also now allows developers to optionally submit new packages that they wish to share with others.  You can learn more about how to create and share a package here. There are hundreds of open-source .NET projects already within the NuGet Gallery today.  We hope to have thousands there in the future.

    IIS Express 7.5

    Today we are also shipping IIS Express 7.5.  IIS Express is a free version of IIS 7.5 that is optimized for developer scenarios.  It works for both ASP.NET Web Forms and ASP.NET MVC project types. We think IIS Express combines the ease of use of the ASP.NET Web Server (aka Cassini) currently built into Visual Studio today with the full power of IIS.  Specifically:

    - It's lightweight and easy to install (less than 5Mb download and a quick install)
    - It does not require an administrator account to run/debug applications from Visual Studio
    - It enables a full web-server feature set, including SSL, URL Rewrite, and other IIS 7.x modules
    - It supports and enables the same extensibility model and web.config file settings that IIS 7.x supports
    - It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all)
    - It works on Windows XP and higher operating systems, giving you a full IIS 7.x developer feature-set on all Windows OS platforms

    IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk.  It does not require any registration/configuration steps. This makes it really easy to launch and run for development scenarios.  You can also optionally redistribute IIS Express with your own applications if you want a lightweight web-server.  The standard IIS Express EULA now includes redistributable rights. Visual Studio 2010 SP1 adds support for IIS Express.  Read my VS 2010 SP1 and IIS Express blog post to learn more about what it enables.

    SQL Server Compact Edition 4

    Today we are also shipping SQL Server Compact Edition 4 (aka SQL CE 4).  SQL CE is a free, embedded, database engine that enables easy database storage.

    No Database Installation Required

    SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run.
    You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start up when you first access a SQL CE database, and will automatically shut down when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET applications.

    Works with Existing Data APIs

    SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today.

    Supports Development, Testing and Production Scenarios

    SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we've done the engineering work to ensure that SQL CE won't crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE, which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web-server as well. There are no license restrictions with SQL CE.  It is also totally free.

    Tooling Support with VS 2010 SP1

    Visual Studio 2010 SP1 adds support for SQL CE 4 and ASP.NET Projects.  Read my VS 2010 SP1 and SQL CE 4 blog post to learn more about what it enables.

    Web Deploy and Web Farm Framework 2.0

    Today we are also releasing Microsoft Web Deploy V2 and Microsoft Web Farm Framework V2.  These services provide a flexible and powerful way to deploy ASP.NET applications onto either a single server, or across a web farm of machines. You can learn more about these capabilities from my previous blog posts on them:

    - Introducing the Microsoft Web Farm Framework
    - Automating Deployment with Microsoft Web Deploy

    Visit the http://iis.net website to learn more and install them. Both are free.

    Orchard 1.0

    Today we are also releasing Orchard v1.0.  Orchard is a free, open source, community based project.  It provides Content Management System (CMS) and blogging system support out of the box, and makes it possible to easily create and manage web-sites without having to write code (site owners can customize a site through the browser-based editing tools built into Orchard).  Read these tutorials to learn more about how you can set up and manage your own Orchard site. Orchard itself is built as an ASP.NET MVC 3 application using Razor view templates (and by default uses SQL CE 4 for data storage).  Developers wishing to extend an Orchard site with custom functionality can open and edit it as a Visual Studio project, and add new ASP.NET MVC Controllers/Views to it.

    WebMatrix 1.0

    WebMatrix is a new, free, web development tool from Microsoft that provides a suite of technologies that make it easier to enable website development.  It enables a developer to start a new site by browsing and downloading an app template from an online gallery of web applications (which includes popular apps like Umbraco, DotNetNuke, Orchard, WordPress, Drupal and Joomla).  Alternatively it also enables developers to create and code web sites from scratch. WebMatrix is task focused and helps guide developers as they work on sites.
    WebMatrix includes IIS Express, SQL CE 4, and ASP.NET, providing an integrated web-server, database and programming framework combination.  It also includes built-in web publishing support which makes it easy to find and deploy sites to web hosting providers. You can learn more about WebMatrix from my Introducing WebMatrix blog post this summer.  Visit http://microsoft.com/web to download and install it today.

    Summary

    I'm really excited about today's releases; they provide a bunch of additional value that makes web development with ASP.NET, Visual Studio and the Microsoft Web Server a lot better.  A lot of folks worked hard to share this with you today. On behalf of my whole team, we hope you enjoy them!

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • VS 2010 SP1 and SQL CE

    - by ScottGu
Last month we released the Beta of VS 2010 Service Pack 1 (SP1). You can learn more about the VS 2010 SP1 Beta from Jason Zander’s two blog posts about it, and from Scott Hanselman’s blog post that covers some of the new capabilities enabled with it. You can download and install the VS 2010 SP1 Beta here. Last week I blogged about the new Visual Studio support for IIS Express that we are adding with VS 2010 SP1. In today’s post I’m going to talk about the new VS 2010 SP1 tooling support for SQL CE, and walk through some of the cool scenarios it enables.

SQL CE – What is it and why should you care?

SQL CE is a free, embedded database engine that enables easy database storage.

No Database Installation Required

SQL CE does not require you to run a setup or install a database server in order to use it. You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine. No setup or extra security permissions are required for it to run. You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application, starts up when you first access a SQL CE database, and automatically shuts down when your application is unloaded. SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET applications.

Works with Existing Data APIs

SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax. This means you can use existing data APIs like ADO.NET, as well as higher-level ORMs like Entity Framework and NHibernate, with SQL CE. This enables you to use the same data programming skills and data APIs you know today.

Supports Development, Testing and Production Scenarios

SQL CE can be used for development, testing, and light production usage scenarios. With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET). This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and explicitly blocked running in web-server environments. Starting with SQL CE 4 you can use it in a web server as well. There are no license restrictions with SQL CE, and it is totally free.

Easy Migration to SQL Server

SQL CE is an embedded database – which makes it ideal for development, testing, and light-usage scenarios. For high-volume sites and applications you’ll probably want to migrate your database to use SQL Server Express (which is free), SQL Server or SQL Azure. These servers enable much better scalability, more development features (including features like stored procedures – which aren’t supported with SQL CE), as well as more advanced data management capabilities. We’ll ship migration tools that enable you to optionally take SQL CE databases and easily upgrade them to use SQL Server Express, SQL Server, or SQL Azure. You will not need to change your code when upgrading a SQL CE database to SQL Server or SQL Azure. Our goal is to enable you to simply change the database connection string in your web.config file and have your application just work.
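As a concrete illustration of the “works with existing data APIs” point, here is a minimal sketch of querying a SQL CE database with plain ADO.NET. The Store.sdf file and Products table names match the walkthrough below; the code itself is illustrative and not from the original post:

using System;
using System.Data.SqlServerCe;   // ADO.NET provider for SQL CE 4

class QuickQuery
{
    static void Main()
    {
        // A SQL CE connection string just points at an .sdf file.
        // In an ASP.NET app you would typically use
        // "Data Source=|DataDirectory|\Store.sdf" so the file lives in \App_Data.
        using (var connection = new SqlCeConnection("Data Source=Store.sdf"))
        {
            connection.Open();   // starts the in-process engine on first use

            using (var command = new SqlCeCommand(
                "SELECT COUNT(*) FROM Products", connection))
            {
                int count = (int)command.ExecuteScalar();
                Console.WriteLine("Products in the database: {0}", count);
            }
        }
    }
}

Because the query syntax is SQL Server compatible, moving raw ADO.NET code like this to SQL Server later is mostly a matter of swapping SqlCeConnection/SqlCeCommand for SqlConnection/SqlCommand; with a higher-level ORM like EF or NHibernate it comes down to changing the connection string.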
New Tooling Support for SQL CE in VS 2010 SP1

VS 2010 SP1 includes much improved tooling support for SQL CE, and adds support for using SQL CE within ASP.NET projects for the first time. With VS 2010 SP1 you can now:

- Create new SQL CE databases
- Edit and modify SQL CE database schema and indexes
- Populate SQL CE databases with data
- Use the Entity Framework (EF) designer to create model layers against SQL CE databases
- Use EF Code First to define model layers in code, then create a SQL CE database from them, and optionally edit the DB with VS
- Deploy SQL CE databases to remote servers using Web Deploy and optionally convert them to full SQL Server databases

You can take advantage of all of the above features from within both ASP.NET Web Forms and ASP.NET MVC based projects.

Download

You can enable SQL CE tooling support within VS 2010 by first installing VS 2010 SP1 (beta). Once SP1 is installed, you’ll also then need to install the SQL CE Tools for Visual Studio download. This is a separate download that enables the SQL CE tooling support for VS 2010 SP1.

Walkthrough of Two Scenarios

In this blog post I’m going to walk through how you can take advantage of SQL CE and VS 2010 SP1 using both an ASP.NET Web Forms and an ASP.NET MVC based application. Specifically, we’ll walk through:

- How to create a SQL CE database using VS 2010 SP1, then use the EF4 visual designers in Visual Studio to construct a model layer from it, and then display and edit the data using an ASP.NET GridView control.
- How to use an EF Code First approach to define a model layer using POCO classes and then have EF Code First “auto-create” a SQL CE database for us based on our model classes. We’ll then look at how we can use the new VS 2010 SP1 support for SQL CE to inspect the database that was created, populate it with data, and later make schema changes to it. We’ll do all this within the context of an ASP.NET MVC based application.

You can follow the two walkthroughs below on your own machine by installing VS 2010 SP1 (beta) and then installing the SQL CE Tools for Visual Studio download (which is a separate download that enables SQL CE tooling support for VS 2010 SP1).

Walkthrough 1: Create a SQL CE Database, Create EF Model Classes, Edit the Data with a GridView

This first walkthrough will demonstrate how to create and define a SQL CE database within an ASP.NET Web Forms application. We’ll then build an EF model layer for it and use that model layer to enable data editing scenarios with an <asp:GridView> control.

Step 1: Create a new ASP.NET Web Forms Project

We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET Web Forms project. We’ll use the “ASP.NET Web Application” project template option so that it has a default UI skin implemented:

Step 2: Create a SQL CE Database

Right-click on the “App_Data” folder within the created project and choose the “Add->New Item” menu command: This will bring up the “Add Item” dialog box. Select the “SQL Server Compact 4.0 Local Database” item (new in VS 2010 SP1) and name the database file to create “Store.sdf”: Note that SQL CE database files have a .sdf filename extension. Place them within the /App_Data folder of your ASP.NET application to enable easy deployment. When we clicked the “Add” button above, a Store.sdf file was added to our project:

Step 3: Adding a “Products” Table

Double-clicking the “Store.sdf” database file will open it up within the Server Explorer tab.
Since it is a new database there are no tables within it. Right-click on the “Tables” icon and choose the “Create Table” menu command to create a new database table. We’ll name the new table “Products” and add 4 columns to it. We’ll mark the first column as a primary key (and make it an identity column so that its value will automatically increment with each new row): When we click “OK” our new Products table will be created in the SQL CE database.

Step 4: Populate with Data

Once our Products table is created it will show up within the Server Explorer. We can right-click it and choose the “Show Table Data” menu command to edit its data: Let’s add a few sample rows of data to it:

Step 5: Create an EF Model Layer

We have a SQL CE database with some data in it – let’s now create an EF model layer that will provide a way for us to easily query and update data within it. Let’s right-click on our project and choose the “Add->New Item” menu command. This will bring up the “Add New Item” dialog – select the “ADO.NET Entity Data Model” item within it and name it “Store.edmx”. This will add a new Store.edmx item to our Solution Explorer and launch a wizard that allows us to quickly create an EF model: Select the “Generate From Database” option above and click next. Choose to use the Store.sdf SQL CE database we just created and then click next again. The wizard will then ask you what database objects you want to import into your model. Let’s choose to import the “Products” table we created earlier: When we click the “Finish” button Visual Studio will open up the EF designer. It will have a Product entity already on it that maps to the “Products” table within our SQL CE database: The VS 2010 SP1 EF designer works exactly the same with SQL CE as it does already with SQL Server and SQL Express. The Product entity above will be persisted as a class (called “Product”) that we can programmatically work against within our ASP.NET application.

Step 6: Compile the Project

Before using your model layer you’ll need to build your project. Do a Ctrl+Shift+B to compile the project, or use the Build->Build Solution menu command.

Step 7: Create a Page that Uses our EF Model Layer

Let’s now create a simple ASP.NET Web Form that contains a GridView control that we can use to display and edit our Products data (via the EF model layer we just created). Right-click on the project and choose the Add->New Item command. Select the “Web Form using Master Page” item template, and name the page you create “Products.aspx”. Base the master page on the “Site.Master” template that is in the root of the project. Add an <h2>Products</h2> heading to the new page, and add an <asp:GridView> control within it: Then click the “Design” tab to switch into design view. Select the GridView control, and then click the top-right corner to display the GridView’s “Smart Tasks” UI: Choose the “New data source…” drop-down option above. This will bring up the below dialog which allows you to pick your data source type: Select the “Entity” data source option – which will allow us to easily connect our GridView to the EF model layer we created earlier. This will bring up another dialog that allows us to pick our model layer: Select the “StoreEntities” option in the dropdown – which is the EF model layer we created earlier.
Then click next – which will allow us to pick which entity within it we want to bind to: Select the “Products” entity in the above dialog – which indicates that we want to bind against the “Product” entity class we defined earlier. Then click the “Enable automatic updates” checkbox to ensure that we can both query and update Products. When you click “Finish” VS will wire up an <asp:EntityDataSource> to your <asp:GridView> control: The last two steps are to click the “Enable Editing” checkbox on the grid (which will cause the grid to display an “Edit” link on each row) and (optionally) use the Auto Format dialog to pick a UI template for the grid.

Step 8: Run the Application

Let’s now run our application and browse to the /Products.aspx page that contains our GridView. When we do so we’ll see a grid UI of the Products within our SQL CE database. Clicking the “Edit” link for any of the rows will allow us to edit their values: When we click “Update” the GridView will post back the values, persist them through our EF model layer, and ultimately save them within our SQL CE database.

Learn More about using EF with ASP.NET Web Forms

Read this tutorial series on the http://asp.net site to learn more about how to use EF with ASP.NET Web Forms. The tutorial series uses SQL Express as the database – but the nice thing is that all of the same steps/concepts can now also be done with SQL CE.

Walkthrough 2: Using EF Code First with SQL CE and ASP.NET MVC 3

We used a database-first approach with the sample above – where we first created the database, and then used the EF designer to create model classes from the database. In addition to supporting a designer-based development workflow, EF also enables a more code-centric option which we call “code-first development”. Code-first development enables a pretty sweet development workflow. It enables you to:

- Define your model objects by simply writing “plain old classes” with no base classes or visual designer required
- Use a “convention over configuration” approach that enables database persistence without explicitly configuring anything
- Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping
- Optionally auto-create a database based on the model classes you define – allowing you to start from code first

I’ve done several blog posts about EF Code First in the past – I really think it is great. The good news is that it also works very well with SQL CE. The combination of SQL CE, EF Code First, and the new VS tooling support for SQL CE enables a pretty nice workflow. Below is a simple example of how you can use them to build a simple ASP.NET MVC 3 application.

Step 1: Create a new ASP.NET MVC 3 Project

We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET MVC 3 project. We’ll use the “Internet Project” template so that it has a default UI skin implemented:

Step 2: Use NuGet to Install EFCodeFirst

Next we’ll use the NuGet package manager (automatically installed by ASP.NET MVC 3) to add the EFCodeFirst library to our project. We’ll use the package manager command shell to do this. Bring up the package manager console within Visual Studio by selecting the View->Other Windows->Package Manager Console menu command.
Then type:

install-package EFCodeFirst

within the package manager console to download the EFCodeFirst library and have it be added to our project: When we enter the above command, the EFCodeFirst library will be downloaded and added to our application:

Step 3: Build Some Model Classes

Using a “code first” based development workflow, we will create our model classes first (even before we have a database). We create these model classes by writing code. For this sample, we will right-click on the “Models” folder of our project and add the below three classes to our project (a sketch of these classes – together with the controller action and connection string used in the next two steps – appears at the end of Step 6 below): The “Dinner” and “RSVP” model classes above are “plain old CLR objects” (aka POCO). They do not need to derive from any base classes or implement any interfaces, and the properties they expose are standard .NET data types. No data persistence attributes or data access code have been added to them. The “NerdDinners” class derives from the DbContext class (which is supplied by EFCodeFirst) and handles the retrieval/persistence of our Dinner and RSVP instances from a database.

Step 4: Listing Dinners

We’ve written all of the code necessary to implement our model layer for this simple project. Let’s now expose and implement the URL /Dinners/Upcoming within our project. We’ll use it to list upcoming dinners that happen in the future. We’ll do this by right-clicking on our “Controllers” folder and selecting the “Add->Controller” menu command. We’ll name the controller we want to create “DinnersController”. We’ll then implement an “Upcoming” action method within it that lists upcoming dinners using our model layer above. We will use a LINQ query to retrieve the data and pass it to a view to render: We’ll then right-click within our Upcoming method and choose the “Add View” menu command to create an “Upcoming” view template that displays our dinners. We’ll use the “empty” template option within the “Add View” dialog and write the view template using Razor:

Step 5: Configure our Project to use a SQL CE Database

We have finished writing all of our code – our last step will be to configure a database connection string to use. We will point our NerdDinners model class to a SQL CE database by adding a <connectionString> entry to the web.config file at the top of our project: EF Code First uses a default convention where context classes will look for a connection string that matches the DbContext class name. Because we created a “NerdDinners” class earlier, we’ve also named our connection string “NerdDinners”. Above we are configuring our connection string to use SQL CE as the database, and telling it that our SQL CE database file will live within the \App_Data directory of our ASP.NET project.

Step 6: Running our Application

Now that we’ve built our application, let’s run it! We’ll browse to the /Dinners/Upcoming URL – doing so will display an empty list of upcoming dinners: You might ask – but where did it query to get the dinners from? We didn’t explicitly create a database?!? One of the cool features that EF Code First supports is the ability to automatically create a database (based on the schema of our model classes) when the database we point it at doesn’t exist. Above we configured EF Code First to point at a SQL CE database in the \App_Data\ directory of our project. When we ran our application, EF Code First saw that the SQL CE database didn’t exist and automatically created it for us.
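The model classes, the Upcoming action, and the <connectionString> entry referenced in Steps 3–5 were shown as screenshots in the original post. Here is a hedged sketch of what they would look like; the class names (Dinner, RSVP, NerdDinners, DinnersController) come from the text above, while the individual property names are assumptions, not the post’s verbatim code:

using System;
using System.Collections.Generic;
using System.Data.Entity;   // DbContext/DbSet come from the EFCodeFirst package
using System.Linq;
using System.Web.Mvc;

// Models/Dinner.cs -- a plain POCO class; property names are illustrative.
public class Dinner
{
    public int DinnerID { get; set; }            // primary key by EF convention
    public string Title { get; set; }
    public DateTime EventDate { get; set; }
    public string Address { get; set; }
    public virtual ICollection<RSVP> RSVPs { get; set; }
}

// Models/RSVP.cs
public class RSVP
{
    public int RSVPID { get; set; }
    public int DinnerID { get; set; }            // foreign key back to Dinner
    public string AttendeeEmail { get; set; }
    public virtual Dinner Dinner { get; set; }
}

// Models/NerdDinners.cs -- derives from DbContext and handles persistence.
public class NerdDinners : DbContext
{
    public DbSet<Dinner> Dinners { get; set; }
    public DbSet<RSVP> RSVPs { get; set; }
}

// Controllers/DinnersController.cs -- the "Upcoming" action described in Step 4.
public class DinnersController : Controller
{
    public ActionResult Upcoming()
    {
        using (var db = new NerdDinners())
        {
            // LINQ query that retrieves only dinners happening in the future
            var upcoming = (from d in db.Dinners
                            where d.EventDate > DateTime.Now
                            orderby d.EventDate
                            select d).ToList();
            return View(upcoming);
        }
    }
}

// And the web.config entry from Step 5 (named "NerdDinners" to match the
// DbContext class, per the convention described above) would look roughly like:
//
//   <connectionStrings>
//     <add name="NerdDinners"
//          connectionString="Data Source=|DataDirectory|\NerdDinners.sdf"
//          providerName="System.Data.SqlServerCe.4.0" />
//   </connectionStrings>

Note how nothing in these classes references SQL CE directly – the database choice lives entirely in the connection string, which is what makes the later migration to SQL Server or SQL Azure a config-only change.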
Step 7: Using VS 2010 SP1 to Explore our Newly Created SQL CE Database

Click the “Show All Files” icon within the Solution Explorer and you’ll see the “NerdDinners.sdf” SQL CE database file that was automatically created for us by EF Code First within the \App_Data\ folder: We can optionally right-click on the file and choose “Include in Project” to add it to our solution: We can also double-click the file (regardless of whether it is added to the project) and VS 2010 SP1 will open it as a database we can edit within the “Server Explorer” tab of the IDE. Below is the view we get when we double-click our NerdDinners.sdf SQL CE file. We can drill in to see the schema of the Dinners and RSVPs tables in the tree explorer. Notice how two tables – Dinners and RSVPs – were automatically created for us within our SQL CE database. This was done by EF Code First when we accessed the NerdDinners class by running our application above: We can right-click on a table and use the “Show Table Data” command to enter some upcoming dinners in our database: We’ll use the built-in editor that VS 2010 SP1 supports to populate our table data below: And now when we hit “refresh” on the /Dinners/Upcoming URL within our browser we’ll see some upcoming dinners show up:

Step 8: Changing our Model and Database Schema

Let’s now modify the schema of our model layer and database, and walk through one way that the new VS 2010 SP1 tooling support for SQL CE can make this easier. With EF Code First you typically start making database changes by modifying the model classes. For example, let’s add an additional string property called “UrlLink” to our “Dinner” class. We’ll use this to point to a link for more information about the event: Now when we re-run our project and visit the /Dinners/Upcoming URL we’ll see an error thrown: We are seeing this error because EF Code First automatically created our database, and by default when it does this it adds a table that helps track whether the schema of our database is in sync with our model classes. EF Code First helpfully throws an error when they become out of sync – making it easier to track down issues at development time that you might otherwise only find (via obscure errors) at runtime. Note that if you do not want this feature you can turn it off by changing the default conventions of your DbContext class (in this case our NerdDinners class) to not track the schema version. Our model classes and database schema are out of sync in the above example – so how do we fix this? There are two approaches you can use today:

- Delete the database and have EF Code First automatically re-create the database based on the new model class schema (losing the data within the existing DB)
- Modify the schema of the existing database to make it in sync with the model classes (keeping/migrating the data within the existing DB)

There are a couple of ways you can do the second approach above. Below I’m going to show how you can take advantage of the new VS 2010 SP1 tooling support for SQL CE to use a database schema tool to modify our database structure. We are also going to be supporting a “migrations” feature with EF in the future that will allow you to automate/script database schema migrations programmatically.

Step 9: Modify our SQL CE Database Schema using VS 2010 SP1

The new SQL CE tooling support within VS 2010 SP1 makes it easy to modify the schema of our existing SQL CE database.
To do this we’ll right-click on our “Dinners” table and choose the “Edit Table Schema” command: This will bring up the below “Edit Table” dialog. We can rename, change or delete any of the existing columns in our table, or click at the bottom of the column listing and type to add a new column. Below I’ve added a new “UrlLink” column of type “nvarchar” (since our property is a string): When we click OK our database will be updated to have the new column and our schema will now match our model classes. Because we are manually modifying our database schema, there is one additional step we need to take to let EF Code First know that the database schema is in sync with our model classes. As I mentioned earlier, when a database is automatically created by EF Code First it adds an “EdmMetadata” table to the database to track schema versions (and hash our model classes against them to detect mismatches between our model classes and the database schema): Since we are manually updating and maintaining our database schema, we don’t need this table – and can just delete it: This will leave us with just the two tables that correspond to our model classes: And now when we re-run our /Dinners/Upcoming URL it will display the dinners correctly: One last touch would be to update our view to check for the new UrlLink property and render an <a> link to it if an event has one: And now when we refresh our /Dinners/Upcoming we will see hyperlinks for the events that have a UrlLink stored in the database:

Summary

SQL CE provides a free, embedded database engine that you can use to easily enable database storage. With SQL CE 4 you can now take advantage of it within ASP.NET projects and applications (both Web Forms and MVC). VS 2010 SP1 provides tooling support that enables you to easily create, edit and modify SQL CE databases – as well as use the standard EF designer against them. This allows you to reuse your existing skills and data knowledge while taking advantage of an embedded database option. This is useful both for small applications (where you don’t need the scalability of a full SQL Server), as well as for development and testing scenarios – where you want to be able to rapidly develop/test your application without having a full database instance. SQL CE makes it easy to later migrate your data to a full SQL Server or SQL Azure instance if you want to – without having to change any code in your application. All we would need to change in the above two scenarios is the <connectionString> value within the web.config file in order to have our code run against a full SQL Server. This provides the flexibility to start from a small embedded database solution and scale up your application as needed.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Users using Perl script to bypass Squid Proxy

    - by mk22
The users on our network have been using a Perl script to bypass our Squid proxy restrictions. Is there any way we can block this script from working?

#!/usr/bin/perl
########################################################################
# (c) 2008 Indika Bandara Udagedara
# [email protected]
# http://indikabandara19.blogspot.com
#
# ----------
# LICENCE
# ----------
# This work is protected under GNU GPL
# It simply says
# " you are hereby granted to do whatever you want with this
#   except claiming you wrote this."
#
#
# ----------
# README
# ----------
# A simple tool to download via http proxies which enforce a download
# size limit. Requires curl.
# This is NOT a hack. This uses the absolutely legal HTTP/1.1 spec
# Tested only for squid-2.6. Only squids will work with this(i think)
# Please read the verbose README provided kindly by Rahadian Pratama
# if u r on cygwin and think this documentation is not enough :)
#
# The newest version of pget is available at
# http://indikabandara.no-ip.com/~indika/pget
#
# ----------
# USAGE
# ----------
# + Edit below configurations(mainly proxy)
# + First run with -i <file> giving a sample file of same type that
#   you are going to download. Doing this once is enough.
#   eg. to download '.tar' files first run with
#       pget -i my.tar   ('my.tar' should be a real file)
# + Run with
#       pget -g <URL>
#
########################################################################

########################################################################
# CONFIGURATIONS - CHANGE THESE FREELY
########################################################################
# *magic* file
# pls set absolute path if in cygwin
my $_extFile = "./pget.ext";

# download in chunks of below size
my $_chunkSize = 1024*1024;            # in Bytes

# the proxy that troubles you
my $_proxy      = "192.168.0.2:3128";  # proxy URL:port
my $_proxy_auth = "user:pass";         # proxy user:pass

# whereis curl
# pls set absolute path if in cygwin
my $_curl = "/usr/bin/curl";

########################################################################
# EDIT BELOW ONLY IF YOU KNOW WHAT YOU ARE DOING
########################################################################
use warnings;

my $_version = "0.1.0";

PrintBanner();

if (@ARGV == 0) {
    PrintHelp();
    exit;
}

PrimaryValidations();

my $val;
while (scalar(@ARGV)) {
    my $arg = shift(@ARGV);
    if ($arg eq '-h') {
        PrintHelp();
    } elsif ($arg eq '-i') {
        $val = shift(@ARGV);
        if (!defined($val)) {
            printf("-i option requires a filename\n");
            exit;
        }
        Init($val);
    } elsif ($arg eq '-g') {
        $val = shift(@ARGV);
        if (!defined($val)) {
            printf("-g option requires a URL\n");
            exit;
        }
        GetURL($val);
    } elsif ($arg eq '-c') {
        $val = shift(@ARGV);
        if (!defined($val)) {
            printf("-c option requires a URL\n");
            exit;
        }
        ContinueURL($val);
    } else {
        printf("Unknown option %s\n", $arg);
        PrintHelp();
    }
}

sub GetURL {
    my ($URL) = @_;
    chomp($URL);
    my $fileName = GetFileName($URL);
    my %mapExt;
    my $first;
    my $readLen;
    my $ext = GetExt($fileName);
    ReadMap($_extFile, \%mapExt);
    if (exists($mapExt{$ext})) {
        $first = $mapExt{$ext};
        GetFile($URL, $first, $fileName, 0);
    } else {
        die "Unknown ext in $fileName. Rerun with -i <fileName>";
    }
}

sub ContinueURL {
    my ($URL) = @_;
    chomp($URL);
    my $fileName = GetFileName($URL);
    my $fileSize = 0;
    $fileSize = -s $fileName;
    printf("Size = %d\n", $fileSize);
    my $first = -1;
    if ($fileSize > 0) {
        $fileSize -= 1;
        GetFile($URL, $first, $fileName, $fileSize);
    } else {
        GetURL($URL);
    }
}

sub Init {
    my ($fileName) = @_;
    my ($key, $value);
    my %mapExt;
    my $ext = GetExt($fileName);
    if ($ext eq "") {
        die "Cannot get ext of \'$fileName\'";
    }
    ReadMap($_extFile, \%mapExt);
    my $b = GetFirst($fileName);
    $mapExt{$ext} = $b;
    WriteMap($_extFile, \%mapExt);
    print "I handle\n";
    while (($key, $value) = each(%mapExt)) {
        print "\t$key -> $value\n";
    }
}

sub GetExt {
    my ($name) = @_;
    my @x = split(/\./, $name);
    my $ext = "";
    if (@x != 1) {
        $ext = pop @x;
    }
    return $ext;
}

sub ReadMap {
    my ($fileName, $mapRef) = @_;
    my $f;
    my @arr;
    open($f, '<', $fileName) or die "Couldn't open $fileName";
    my %map = %{$mapRef};
    while (<$f>) {
        my $line = $_;
        chomp($line);
        @arr = split(/[ \t]+/, $line, 2);
        $mapRef->{ $arr[0]} = $arr[1];
    }
    printf("known ext\n");
    while (($key, $value) = each(%$mapRef)) {
        print("$key, $value\n");
    }
    close($f);
}

sub WriteMap {
    my ($fileName, $mapRef) = @_;
    my $f;
    my @arr;
    open($f, '>', $fileName) or die "Couldn't open $fileName";
    my ($k, $v);
    while (($k, $v) = each(%{$mapRef})) {
        print $f "$k" . "\t$v\n";
    }
    close($f);
}

sub PrintHelp {
    print "usage:
    -h              Print this help
    -i <filename>   Initialize for this filetype
    -g <URL>        Get this URL\n
    -c <URL>        Continue this URL\n"
}

sub GetFirst {
    my ($fileName) = @_;
    my $f;
    open($f, "<$fileName") or die "Couldn't open $fileName";
    my $buffer = "";
    my $first = -1;
    binmode($f);
    sysread($f, $buffer, 1, 0);
    close($f);
    $first = ord($buffer);
    return $first;
}

sub GetFirstFromMap {
}

sub GetFileName {
    my ($URL) = @_;
    my @x = split(/\//, $URL);
    my $fileName = pop @x;
    return $fileName;
}

sub GetChunk {
    my ($URL, $file, $offset, $readLen) = @_;
    my $end = $offset + $_chunkSize - 1;
    my $curlCmd = "$_curl -x $_proxy -u $_proxy_auth -r $offset-$end -# \"$URL\"";
    print "$curlCmd\n";
    my $buff = `$curlCmd`;
    ${$readLen} = syswrite($file, $buff, length($buff));
}

sub GetFile {
    my ($URL, $first, $outFile, $fileSize) = @_;
    my $readLen = 0;
    my $start = $fileSize + 1;
    my $file;
    open($file, "+>>$outFile") or die "Couldn't open $outFile to write";
    if ($fileSize <= 0) {
        my $uc = pack("C", $first);
        syswrite($file, $uc, 1);
    }
    do {
        GetChunk($URL, $file, $start, \$readLen);
        $start = $start + $_chunkSize;
        $fileSize += $readLen;
    } while ($readLen == $_chunkSize);
    printf("Downloaded %s(%d bytes).\n", $outFile, $fileSize);
    close($file);
}

sub PrintBanner {
    printf("pget version %s\n", $_version);
    printf("There is absolutely NO WARRANTY for pget.\n");
    printf("Use at your own risk. You have been warned.\n\n");
}

sub PrimaryValidations {
    unless (-e "$_curl") {
        printf("ERROR:curl is not at %s. Pls install or provide correct path.\n", $_curl);
        exit;
    }
    unless (-e "$_extFile") {
        printf("extFile is not at %s. Creating one\n", $_extFile);
        `touch $_extFile`;
    }
    if ($_chunkSize <= 0) {
        printf("Invalid chunk size. Using 1Mb as default.\n");
        $_chunkSize = 1024*1024;
    }
}
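One possible countermeasure, sketched below and not tested: the script works by splitting a download into many small HTTP/1.1 Range requests (via curl -r), so Squid can be told to deny any request that carries a Range header. Note this is a blunt instrument that would also break legitimate resumed downloads and some streaming clients:

# squid.conf sketch: block requests that carry an HTTP Range header.
# This defeats pget-style chunked fetching, but also breaks legitimate
# download resuming -- use with care.
acl range_request req_header Range .
http_access deny range_request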


  • SSH login very slow on OS X Leopard

    - by acjohnson55
My SSH sessions take a very long time to initiate. This applies for logins with and without passwords, interactive and non-interactive. I have tried setting 'GSSAPIAuthentication no' and 'IPQoS 0x00' on the client side, and 'UseDNS no' on the server side, but no dice. I'm really stumped and frustrated. The worst part is that SFTP takes forever to establish connections too, making file transfer much longer than it would be otherwise. I thought the problem might be something with PAM, because of where the hang is in the sshd log below, so I tried commenting out each line one-by-one in the /etc/pam.d/sshd file. Some caused login to be impossible, some had no apparent effect. I can't really tell if PAM is stalling for other services, but I can say that su'ing into my account from another account with 'su -l' has no apparent delay. I tried creating a new user account, just to see if there was something wrong with my existing account, and the same problem persisted. Any ideas of what's going on?

On the client side, the most verbose mode outputs (redacted where reasonable): OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data ... debug1: ... line 1: Applying options for ... debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug1: /etc/ssh_config line 53: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to ... [x.x.x.x] port 22. debug1: Connection established. debug1: identity file /.../.ssh/id_rsa type -1 debug1: identity file /.../.ssh/id_rsa-cert type -1 debug3: Incorrect RSA1 identifier debug3: Could not load "/.../.ssh/id_dsa" as a RSA1 public key debug1: identity file /.../.ssh/id_dsa type 2 debug1: identity file /.../.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.2 debug1: match: OpenSSH_5.2 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "..."
from file "/.../.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /.../.ssh/known_hosts:9 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: ssh-rsa-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-rsa debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-rsa,ssh-dss-cert-v01@openssh.com,ssh-dss-cert-v00@openssh.com,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,zlib@openssh.com debug2: kex_parse_kexinit: none,zlib@openssh.com debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 136/256 debug2: bits set: 523/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA ... debug3: load_hostkeys: loading entries for host "..." from file "/.../.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /.../.ssh/known_hosts:9 debug3: load_hostkeys: loaded 1 keys debug3: load_hostkeys: loading entries for host "x.x.x.x" from file "/.../.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /.../.ssh/known_hosts:9 debug3: load_hostkeys: loaded 1 keys debug1: Host '...' is known and matches the RSA host key.
debug1: Found key in /.../.ssh/known_hosts:9 debug2: bits set: 492/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /.../.ssh/id_dsa (0x7f8b7b41d6c0) debug2: key: /.../.ssh/id_rsa (0x0) debug1: Authentications that can continue: publickey,password,keyboard-interactive debug3: start over, passed a different list publickey,password,keyboard-interactive debug3: preferred publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering DSA public key: /.../.ssh/id_dsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-dss blen 434 debug2: input_userauth_pk_ok: fp ... debug3: sign_and_send_pubkey: DSA ... debug1: Authentication succeeded (publickey). Authenticated to ... ([x.x.x.x]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting no-more-sessions@openssh.com debug1: Entering interactive session. ****** Hangs here ****** debug2: callback start debug2: client_session2_setup: id 0 debug2: fd 3 setting TCP_NODELAY debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. debug3: Ignored env TERM_PROGRAM debug3: Ignored env SHELL debug3: Ignored env TERM debug3: Ignored env TMPDIR debug3: Ignored env Apple_PubSub_Socket_Render debug3: Ignored env TERM_PROGRAM_VERSION debug3: Ignored env TERM_SESSION_ID debug3: Ignored env USER debug3: Ignored env COMMAND_MODE debug3: Ignored env SSH_AUTH_SOCK debug3: Ignored env Apple_Ubiquity_Message debug3: Ignored env __CF_USER_TEXT_ENCODING debug3: Ignored env PATH debug3: Ignored env MKL_NUM_THREADS debug3: Ignored env PWD debug1: Sending env LANG = en_US.UTF-8 debug2: channel 0: request env confirm 0 debug3: Ignored env HOME debug3: Ignored env SHLVL debug3: Ignored env DYLD_LIBRARY_PATH debug3: Ignored env PYTHONPATH debug3: Ignored env LOGNAME debug3: Ignored env DISPLAY debug3: Ignored env SECURITYSESSIONID debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug2: channel_input_status_confirm: type 99 id 0 debug2: PTY allocation request accepted on channel 0 debug2: channel 0: rcvd adjust 2097152 debug2: channel_input_status_confirm: type 99 id 0 debug2: shell request accepted on channel 0

On the server side, the debug output looks like: Sep 16 18:46:40 ... sshd[31435]: debug1: inetd sockets after dupping: 3, 4 Sep 16 18:46:40 ... sshd[31435]: Connection from x.x.x.x port 52758 Sep 16 18:46:40 ... sshd[31435]: debug1: Current Session ID is 56AC0FB0 / Session Attributes are 00008000 Sep 16 18:46:40 ... sshd[31435]: debug1: Running in inetd mode in a non-root session... assuming inetd created the session for us. Sep 16 18:46:40 ... sshd[31435]: debug1: Client protocol version 2.0; client software version OpenSSH_5.9 Sep 16 18:46:40 ... sshd[31435]: debug1: match: OpenSSH_5.9 pat OpenSSH* Sep 16 18:46:40 ... sshd[31435]: debug1: Enabling compatibility mode for protocol 2.0 Sep 16 18:46:40 ...
sshd[31435]: debug1: Local version string SSH-2.0-OpenSSH_5.2 Sep 16 18:46:40 ... sshd[31435]: debug1: Checking with Service ACLs for ssh login restrictions Sep 16 18:46:40 ... sshd[31435]: debug1: call to mbr_user_name_to_uuid with <...> suceeded to retrieve user_uuid Sep 16 18:46:40 ... sshd[31435]: debug1: Call to mbr_check_service_membership failed with status <0> Sep 16 18:46:40 ... sshd[31435]: debug1: PAM: initializing for "..." Sep 16 18:46:40 ... sshd[31435]: debug1: PAM: setting PAM_RHOST to "x.x.x.x" Sep 16 18:46:40 ... sshd[31435]: Failed none for ... from x.x.x.x port 52758 ssh2 Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0) Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0 Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0) Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys2 Sep 16 18:46:40 ... sshd[31435]: debug1: fd 5 clearing O_NONBLOCK Sep 16 18:46:40 ... sshd[31435]: debug1: matching key found: file /.../.ssh/authorized_keys2, line 1 Sep 16 18:46:40 ... sshd[31435]: Found matching DSA key: ... Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0 Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0) Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0 Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0) Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys2 Sep 16 18:46:40 ... sshd[31435]: debug1: fd 5 clearing O_NONBLOCK Sep 16 18:46:40 ... sshd[31435]: debug1: matching key found: file /.../.ssh/authorized_keys2, line 1 Sep 16 18:46:40 ... sshd[31435]: Found matching DSA key: ... Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0 Sep 16 18:46:40 ... sshd[31435]: debug1: ssh_dss_verify: signature correct Sep 16 18:46:40 ... sshd[31435]: debug1: do_pam_account: called Sep 16 18:46:40 ... sshd[31435]: Accepted publickey for ... from x.x.x.x port 52758 ssh2 Sep 16 18:46:40 ... sshd[31435]: debug1: monitor_child_preauth: ... has been authenticated by privileged process Sep 16 18:46:40 ... sshd[31435]: debug1: PAM: establishing credentials ***** Hangs here ***** Sep 16 18:46:54 ... sshd[31435]: User child is on pid 31654 Sep 16 18:46:54 ... sshd[31654]: debug1: PAM: establishing credentials Sep 16 18:46:54 ... sshd[31654]: debug1: permanently_set_uid: 509/20 Sep 16 18:46:54 ... sshd[31654]: debug1: Entering interactive session for SSH2. Sep 16 18:46:54 ... sshd[31654]: debug1: server_init_dispatch_20 Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_open: ctype session rchan 0 win 1048576 max 16384 Sep 16 18:46:54 ... sshd[31654]: debug1: input_session_request Sep 16 18:46:54 ... sshd[31654]: debug1: channel 0: new [server-session] Sep 16 18:46:54 ... sshd[31654]: debug1: session_new: session 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_open: channel 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_open: session 0: link with channel 0 Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_open: confirm session Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_global_request: rtype no-more-sessions@openssh.com want_reply 0 Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_req: channel 0 request pty-req reply 1 Sep 16 18:46:54 ...
sshd[31654]: debug1: session_by_channel: session 0 channel 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_input_channel_req: session 0 req pty-req Sep 16 18:46:54 ... sshd[31654]: debug1: Allocating pty. Sep 16 18:46:54 ... sshd[31435]: debug1: session_new: session 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_pty_req: session 0 alloc /dev/ttys008 Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_req: channel 0 request env reply 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_by_channel: session 0 channel 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_input_channel_req: session 0 req env Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_req: channel 0 request shell reply 1 Sep 16 18:46:54 ... sshd[31654]: debug1: session_by_channel: session 0 channel 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_input_channel_req: session 0 req shell Sep 16 18:46:54 ... sshd[31655]: debug1: Setting controlling tty using TIOCSCTTY.
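For reference, the client- and server-side settings mentioned at the top of the question correspond to entries like these (a sketch; paths assume a stock OS X layout, where the server-side file is /etc/sshd_config):

# ~/.ssh/config (client side)
Host *
    GSSAPIAuthentication no
    IPQoS 0x00

# /etc/sshd_config (server side)
UseDNS no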


  • Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6?

    - by Ricardo Ferreira
In this post, I will cover some technical differences between Oracle WebLogic 12c and JBoss EAP 6, which was released a couple of days ago by Red Hat. This article aims to help you evaluate key points to consider when choosing a Java EE application server. In the following sections, I will present some important aspects that most customers ask us about when they are seriously evaluating a middleware infrastructure, especially if you are considering JBoss for some reason. I would suggest that you keep the following question in mind while you are reading the points: "Why should I choose JBoss instead of WebLogic?"

1) Multi Datacenter Deployment and Clustering

- D/R ("Disaster & Recovery") architecture support is embedded in the WebLogic Server 12c product. JBoss EAP 6 on the other hand has no direct D/R support included; Red Hat relies on third-party tools at higher prices. When you consider a middleware solution to host your business-critical application, you should weigh every architectural aspect of the solution. Fail-over support is only one small aspect of a truly reliable solution; if you do not plan for D/R, your solution will not be reliable. Having said that, with Red Hat and JBoss EAP 6 you have this extra cost, which considerably increases the total cost of ownership of the solution. As we commonly hear from analysts, open source is not so cheap once you start seeing the big picture.
- WebLogic Server 12c supports advanced LAN clustering, detection of dead servers, and a common alert framework. JBoss EAP 6 on the other hand has limited LAN clustering support with no server death detection. No alerts are generated when servers go down (unless you buy JBoss ON, a separate product that, as of this writing, does not support JBoss EAP 6), and manual intervention is required when servers go down. In most cases, admin people must rely on "kill -9", "tail -f someFile.log" and "ps ax | grep java" commands to manage failures and clustering anomalies.
- WebLogic Server 12c supports the concept of Node Manager, a separate process that runs on the physical or virtual servers and extends administration of the cluster to WebLogic managed servers that are often distributed across multiple machines and geographic locations. JBoss EAP 6 on the other hand has no equivalent technology; whole server instances must be managed individually.
- WebLogic Server 12c Node Manager supports Coherence to boost performance when managing servers. JBoss EAP 6 on the other hand has no similar technology; there is no way to coordinate JBoss instances using high-throughput, low-latency protocols like InfiniBand. The Node Manager feature also enables another very important capability that JBoss EAP lacks: secured administration. When using WebLogic Node Manager, all administration tasks are sent to the managed servers in a secure tunnel protected by a certificate, which means that the transport layer between the WebLogic administration console and the managed servers is secured by SSL.
- WebLogic Server 12c is now integrated with OTD ("Oracle Traffic Director"), a web server technology derived from the former Sun iPlanet Web Server. This software complements the web server support offered by OHS ("Oracle HTTP Server").
Using OTD, WebLogic instances are load-balanced by highly capable software that knows how to handle SDP ("Sockets Direct Protocol") over InfiniBand, which boosts performance when used with engineered-systems technologies like Oracle Exalogic Elastic Cloud. JBoss EAP 6 on the other hand only offers support for the Apache web server, with custom modules created to deal with JBoss clusters, and only across standard TCP/IP networks.

2) Application and Runtime Diagnostics

- WebLogic Server 12c has diagnostics capabilities embedded in the server, called WLDF ("WebLogic Diagnostic Framework"), so there is no need to rely on third-party tools. JBoss EAP 6 on the other hand has no diagnostics capabilities; its only diagnostics tool is the log generated by the application server, leaving admin people to analyse thousands of log lines to find out what is going on.
- WebLogic Server 12c complements WLDF with JRockit MC ("Mission Control"), which provides administrators and developers complete insight into JVM performance, behavior and possible bottlenecks. WebLogic Server 12c also has a classloader analysis tool embedded, and even a log analyzer tool that enables administrators and developers to view logs of multiple servers at the same time. JBoss EAP 6 on the other hand relies on third-party tools to do something similar; again, only log searching is offered to find out what's going on.
- WebLogic Server 12c offers end-to-end traceability and monitoring through Oracle EM ("Enterprise Manager"), including monitoring of business transactions that flow through web servers, ESBs, application servers and database servers, all with deep JVM analysis and diagnostics. JBoss EAP 6 on the other hand, even using JBoss ON ("Operations Network"), which is a separate product, does not support those features. Red Hat relies on third-party tools to provide direct Oracle database traceability across JVMs. One of those tools is Oracle EM for non-Oracle middleware, which manages JBoss, Tomcat, WebSphere and IIS transparently.
- WebLogic Server 12c, with its JRockit support, offers a tool called JRockit Flight Recorder, which can give developers complete visibility into a recorded period of production behavior with zero extra overhead. This automatic recording allows you to deeply analyse thread latency, memory leaks, thread contention, resource utilization, stack overflow damage and GC ("Garbage Collection") cycles – observing stop-the-world pauses, generational, reference-counting and parallel collections, and mutator thread behavior. JBoss EAP 6 doesn't come close to supporting anything similar, not least because Red Hat doesn't have its own JVM.

3) Application Server Administration

- WebLogic Server 12c offers a complete administration console complemented with scripting and macro-like recording capabilities. A single WebLogic console can manage up to hundreds of WebLogic servers belonging to the same domain. JBoss EAP 6 on the other hand has a limited console and provides XML-centric administration. JBoss, after ten years, started developing a rudimentary centralized administration that still leaves many administration tasks out, so admin people and developers must touch scripts and XML configuration files for most advanced and even simple administration tasks. This leads to error-prone and risky deployments.
Even using JBoss ON, JBoss EAP is not able to offer decent administration features; admin people must be highly skilled in JBoss internal architecture and its management capabilities.
- Oracle EM is available to manage multiple domains, databases, application servers, operating systems and virtualization, with complete end-to-end visibility. JBoss ON does not provide management capabilities across the complete architecture, only basic monitoring. Even deployment must be done outside JBoss ON, which does not integrate well with software other than JBoss. As of this writing, JBoss ON does not support JBoss EAP 6, so even this minimal support is not available for JBoss EAP 6, leaving customers uncovered and dependent on highly skilled JBoss admin people.
- WebLogic Server 12c has the same administration model whatever topology the customer selects. JBoss EAP 6 on the other hand differentiates between two operational models, standalone mode and domain mode, which are not consistent with each other; the administration skills required differ depending on the mode used.
- WebLogic Server 12c has no single-point-of-failure processes, and it does not need any specialized server to be defined. The domain model in WebLogic has been available for at least ten years and is production proven. JBoss EAP 6 on the other hand needs special processes to guarantee JBoss integrity: the PC ("Process Controller") and the HC ("Host Controller"). Unlike WebLogic's, the domain model in JBoss is quite new (a year old at most) and needs to mature considerably before it does what the WebLogic domain model does.
- WebLogic Server 12c supports a parallel deployment model that enables multiple artifacts to be deployed at the same time. JBoss EAP 6 on the other hand has no similar feature; every deployment is done atomically in the containers. This means that if you have a huge EAR (an EAR of 120 MB, for instance) and deploy it onto JBoss EAP 6, the EAR will take several minutes before it starts accepting requests. The same EAR deployed onto WebLogic Server 12c will deploy at least 2x faster than on JBoss.

4) Support and Upgrades

- WebLogic Server 12c has patch management available. JBoss EAP 6 on the other hand has no patch management available; each JBoss EAP instance must be patched manually. To get this capability you need to buy a separate product called JBoss ON ("Operations Network"), which manages this type of thing. But as of this writing JBoss ON does not support JBoss EAP 6, so in practice JBoss EAP 6 does not have this feature.
- WebLogic Server 12c supports previous WebLogic domains without any reconfiguration, since its kernel has been robust and mature since its creation in 1995. JBoss EAP 6 on the other hand has a proven lack of supportability between JBoss AS 4, 5, 6 and 7. Different kernels and messaging engines have been implemented in the JBoss stack over the last five years, revealing an inability to create a well-architected and proven middleware technology.
- WebLogic Server 12c has patch prescription based on customer configuration. JBoss EAP 6 on the other hand has no such capability; people need to open support tickets and have their installations reviewed by Red Hat support to get patch recommendations.
- Oracle WebLogic Server, independent of version, has 8 years of support with new patches, and lifetime availability of existing patches beyond that.
JBoss EAP 6 on the other hand provides patches for a specific application server version up to 5 years after the release date; JBoss EAP 4 and previous versions had only 4 years. A good question that Red Hat will struggle to answer is: "What happens when you find issues after year 5?"

5) RAC ("Real Application Clusters") Support

- WebLogic Server 12c ships with a specific JDBC driver to leverage Oracle RAC clustering capabilities (Fast Application Notification, Transaction Affinity, Fast Connection Failover, etc.). The Oracle JDBC thin driver is also available. JBoss EAP 6 on the other hand ships only the standard Oracle JDBC thin driver. Load balancing with Oracle RAC is not supported; manual intervention is necessary in case of planned or unplanned RAC downtime, and in JBoss EAP 6 the situation does not re-establish itself automatically after downtime.
- WebLogic Server 12c has a feature called Active GridLink for Oracle RAC, which provides up to 3x performance on OLTP applications. This seamless integration between WebLogic and the Oracle database adds more value to critical business applications, leveraging investments in Oracle database technology and Oracle middleware. JBoss EAP 6 on the other hand offers no such performance gains, even when admin people implement some kind of connection-pool tuning.
- WebLogic Server 12c also supports transaction and web session affinity to Oracle RAC, which provides additional performance gains. This is particularly interesting if you are creating a reliable solution distributed not only across a LAN cluster but also across data centers. JBoss EAP 6 on the other hand has no such support.

6) Standards and Technology Support

- WebLogic Server 12c has been fully Java EE 6 compatible and production ready since December 2011. JBoss EAP 6 on the other hand became fully Java EE 6 compatible only in the community version three months later, and production ready only a few days before this article was written in June 2012. Red Hat says they are the masters of innovation and technology proliferation, but compared with Oracle and even other proprietary vendors like IBM, they have historically been slow to deliver the newest technologies and standards adherence.
- Oracle is the steward of Java, driving innovation into the platform from commercial and open-source vendors. Red Hat on the other hand does not have its own JVM and relies on third-party JVMs to complete its application server offering. 95% of Red Hat customers use Oracle HotSpot as the JVM, which means that without Oracle involvement their support is limited exclusively to the application server layer, and we all know that many problems happen in the JVM layer.
- WebLogic Server 12c natively supports JDK 7, which empowers developers to get the most out of the Java platform's productivity when writing code. This differentiates WebLogic from other application servers (except GlassFish, which is also managed by Oracle), because JDK 7 introduces remarkable productivity features like the "try-with-resources" enhancement, catching multiple exceptions with one try block, strings in switch statements, JVM improvements in terms of JDBC, I/O, networking, security and concurrency, and of course the most important feature of Java 7: native support for multiple non-Java languages. More features regarding JDK 7 can be found here.
- Oracle WebLogic Server 12c supports integration with the Spring framework, allowing Spring applications to use WebLogic's transaction manager and exposing bean interfaces as WebLogic MBeans to take advantage of all of WebLogic's monitoring and administration capabilities. JBoss EAP 6 on the other hand has no special integration with Spring. In fact, Red Hat offers a dubious package called "JBoss Web Platform" that in theory supports Spring, but in practice offers no special integration; it is just a way for Red Hat customers to get support for both JBoss and Spring technology through the same customer support channel.
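For reference, a minimal sketch of that Spring integration point, assuming Spring 3.1 or later on the classpath (the configuration class name is arbitrary): Spring's WebLogicJtaTransactionManager binds Spring-managed transactions to WebLogic's JTA transaction manager, so they take part in container transactions and show up in WebLogic monitoring.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import org.springframework.transaction.jta.WebLogicJtaTransactionManager;

@Configuration
@EnableTransactionManagement
public class TransactionConfig {
    // Delegates @Transactional boundaries to WebLogic's JTA implementation
    @Bean
    public WebLogicJtaTransactionManager transactionManager() {
        return new WebLogicJtaTransactionManager();
    }
}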
7) Lightweight Development - Oracle WebLogic Server 12c and Oracle GlassFish are completely integrated and can share applications without any modifications. Starting with 12c, WebLogic natively understands GlassFish deployment descriptors and specific configurations, offering a true and reliable migration path from a community Java EE application server to an enterprise middleware product like WebLogic. JBoss EAP 6 on the other hand has no support for natively reusing an existing (or in-development) application from the JBoss AS community server. JBoss users suffer critical issues at deployment time, including changing the application's libraries and dependencies, patching the DTD or XSD deployment descriptors, refactoring application layers due to classloading issues and anomalies, and rebuilding persistence, business and web layers over "usage of the certified version of a certain dependency" or "frameworks that Red Hat potentially does not recommend", etc. If your culture or enterprise IT directive is to develop Java EE applications on community middleware and transition to vendor-supported enterprise middleware later, Oracle WebLogic plus Oracle GlassFish offers a more sustainable solution. - WebLogic Server 12c has a very light ZIP distribution (less than 165 MB). The JBoss EAP 6 ZIP is around 130 MB, but together with JBoss ON you add another 100 MB, resulting in a larger download footprint. This matters if you plan to automate the setup of application server instances (for example, to rapidly set up a development or staging environment) using Maven or Hudson. - WebLogic Server 12c has complete Maven integration, allowing developers to set up WebLogic domains with a few commands; tasks like downloading WebLogic, installation, domain creation and data source deployment are completely integrated. JBoss EAP 6 on the other hand offers limited integration with those tools. - WebLogic Server 12c has a startup mode called WLX that turns off the EJB, JMS and JCA containers, leaving only the web container with the Java EE 6 web profile enabled. JBoss EAP 6 on the other hand has no such feature; you have to disable the containers you do not want manually. - WebLogic Server 12c supports FastSwap, which lets you change classes without redeployment. This is particularly interesting when you are developing fixes for an application that is already deployed and do not want to redeploy the whole thing. Most application servers offer this behavior for JSP pages; with WebLogic Server 12c you get it for Java classes in general. JBoss EAP 6 on the other hand has no such support, and even JBoss EAP 5 does not support it to this day. 8) JMS and Messaging - WebLogic Server 12c has had a proven, highly scalable JMS implementation since its initial release in 1995. JBoss EAP 6 on the other hand ships a still-immature technology called HornetQ, introduced in JBoss EAP 5 to replace everything implemented in previous versions. Red Hat loves to introduce new technologies across JBoss versions, playing around with customers and their investments; when asked why they changed the implementation and caused such a mess, the answer is always: "the previous implementation was inadequate and not aligned with the community strategy, so we are creating a new and improved one". This practice leads to uncomfortable investments that in the near future (sometimes less than a year) will be affected in some way. - WebLogic Server 12c includes troubleshooting and monitoring features in the WebLogic console and WLDF. JBoss EAP 6 on the other hand has no direct monitoring in the console: activity is reflected only in the logs, and no debug logs are available in case of JMS issues. - WebLogic Server 12c has extremely good performance and scalability. JBoss EAP 6 on the other hand relies on an Oracle database or MySQL for its JMS storage mechanism; if a production issue occurs and Red Hat claims the performance problem is caused by the database, they will not support you on it and will direct you to call Oracle instead. - WebLogic Server 12c supports enterprise messaging features like SAF ("Store and Forward"), distributed queues/topics and foreign JMS providers, which extend the JMS implementation without compromising developer code, keeping things completely transparent (see the sketch below). JBoss EAP 6 on the other hand does not even dream of supporting such features.
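The transparency point is easy to demonstrate with plain JMS code (the JNDI names below are illustrative): the sending code is identical whether "jms/OrderQueue" resolves to a simple queue, a distributed queue or an SAF-backed destination, because that choice lives entirely in server configuration.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class OrderSender {
    public void send(String payload) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/OrderQueue");
        Connection con = cf.createConnection();
        try {
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            // The destination type (plain, distributed, SAF) is invisible here
            producer.send(session.createTextMessage(payload));
        } finally {
            con.close();
        }
    }
}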
9) Caching and Grid - Coherence, Oracle's leading and most mature data grid technology, has been available since the early 2000s and was integrated with WebLogic in 2009. Coherence and WebLogic clusters can both be managed from the WebLogic administrative console, and even Node Manager supports Coherence. JBoss on the other hand discontinued JBoss Cache, its caching implementation, just as it did with the messaging implementation (JBossMQ), which was an issue for long-term customers. JBoss EAP 6 ships Infinispan version 1.0, which is immature and lacks a proven record of successful cases and reliability. - WebLogic Server 12c has a feature called ActiveCache, which uses Coherence to replicate HTTP sessions, without any code changes, from WebLogic and from other application servers like JBoss, Tomcat, WebSphere, GlassFish and even Microsoft IIS. JBoss EAP 6 on the other hand has no such support, and even if it gains it in the future, it will probably cover only its own application server. - Coherence can manage both the L1 and L2 cache levels, supporting Oracle TopLink and other JPA-compliant implementations, even Hibernate (see the sketch below). JBoss EAP 6 and Infinispan on the other hand support only Hibernate. Most important of all, Infinispan has no successful case of L1 or L2 cache support with Hibernate, which leads us to question its viability.
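As a minimal sketch of the classic Coherence API (the cache name is made up; this assumes coherence.jar on the classpath and a default cache configuration):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class PriceCache {
    public static void main(String[] args) {
        // Join the data grid and obtain the named cache. Whether the cache
        // is local, replicated or partitioned is decided by configuration,
        // not by this code.
        NamedCache prices = CacheFactory.getCache("prices");
        prices.put("ORCL", Double.valueOf(29.75));
        System.out.println("ORCL -> " + prices.get("ORCL"));
        CacheFactory.shutdown();
    }
}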
10) Performance - WebLogic Server 12c is certified with Oracle Exalogic Elastic Cloud and can run unchanged applications on this engineered system, letting customers benefit from Exalogic optimizations at both the kernel and JVM layers to boost performance up to 10X for web, OLTP, JMS and grid applications. JBoss EAP 6 on the other hand has no investment in engineered systems: customers have no option to deploy on an ultra-fast Java system if their project grows and performance issues are detected. - WebLogic Server 12c maintains a performance gain with each new release: since WebLogic 5.1 the overall gain has been close to 4X, roughly a 20% gain release over release. JBoss on the other hand provides no SPECjAppServer or SPECjEnterprise performance benchmarks; its so-called "performance gains" remain hidden in customer environments, which makes us wonder whether they are real, since we will never get access to those environments. - WebLogic Server 12c has industry performance benchmark submissions across platforms and configurations, leading SPECj. Oracle WebLogic leads SPECjAppServer performance in multiple categories, covering all customer topologies: single-node, dual-node, multi-node and multi-node with RAC. JBoss, again, provides no SPECjAppServer performance benchmarks. - WebLogic Server 12c has a feature called work managers, which lets your application reach new performance levels based on critical CPU resource utilization. Work managers prioritize work and allocate threads based on an execution model that takes into account administrator-defined parameters and actual run-time performance and throughput. JBoss EAP 6 on the other hand has no comparable feature, and probably never will; without work managers, JBoss EAP 6 forces administrators and especially developers to chase performance gains intrusively, rewriting code and doing performance refactorings. 11) Professional Services Support - With WebLogic Server 12c, or any other technology sold by Oracle, customers can hire OCS ("Oracle Consulting Services") for critical scenarios, deployment assistance for new applications, highly skilled architecture consultancy, best practices, and staff allocation alongside customer teams. All OCS services are available without restrictions, whether the customer has already bought software from Oracle or is just starting an implementation before any acquisition. JBoss EAP 6, or Red Hat to be more specific, only offers professional services if you buy subscriptions. If you are developing a new critical application for your business and need Red Hat's help with a serious issue or architecture decision, they will probably say: "OK... I can help you, but only after you buy subscriptions from me". Red Hat also does not allow its professional services consultants to work on environments running community-based software; they will likely force you to first buy a subscription, download the "enterprise" version and then, optionally, hire their consultants. - Oracle provides its university to educate your team in Oracle technologies, including of course specialized WebLogic application server training. At any time and location you can hire Oracle to train your team, so you get trustworthy knowledge tailored to your specific needs. Product certifications are also available if your technical people want to differentiate themselves as professionals. Red Hat on the other hand has a limited pool of resources to train your team in its technologies. They mainly sell training and certification for RHEL ("Red Hat Enterprise Linux"); if you demand more specialized JBoss middleware training, they will probably point you to some "certified" partner for localized training, since they are apparently discontinuing their education center, at least here in Brazil. They could not reproduce the success of RHEL education in their middleware division, since they must first sell subscriptions before giving you specialized training. And again, specialized training is offered only on the enterprise version (EAP in the case of JBoss), which means the courses will be quite outdated. There are reports of developers who took official Red Hat training this year (2012) where, in a certain advanced JBoss course, Red Hat supposedly covered JBossMQ as the messaging subsystem, and even the printed material was based on JBossMQ because the training had been created for JBoss EAP 4.3.
12) Encouraging Transparency without Ulterior Motives - WebLogic Server 12c, like any other Oracle software, can be downloaded any time from anywhere; you only need an OTN ("Oracle Technology Network") credential, and you can download any enterprise software as many times as you want. It is not some kind of "trial" version: these are the official binaries that will run forever in your data center. Oracle does not push "specific versions" of its software; the binaries you buy from Oracle are the same binaries anyone in the world can download and use for testing and personal education. JBoss EAP 6 on the other hand is not available for download unless you buy a subscription and get access to the Red Hat enterprise repositories. If you need to test, learn or just start building your application with Red Hat's middleware, you must download it from the community website; you are not allowed to download the enterprise version that, according to Red Hat, is more secure, reliable and robust. But none of us wants to start developing software on insecure, unreliable, non-scalable middleware, right? So what do you do? You are "invited" by Red Hat to buy subscriptions to get access to the "cool" version of the software. - WebLogic Server 12c prices are publicly available on the Oracle website. If you want to know right now how much WebLogic will cost your organization, just click here to access the price list; for WebLogic, check the "US Oracle Technology Commercial Price List". Oracle also encourages you to get in touch with a sales representative to discuss discounts that could make the investment possible, but you are not required to do so unless you are interested in buying the technology or want to discuss discount scenarios. JBoss EAP 6 on the other hand does not have its cost publicly available on Red Hat's website or anywhere else, or at least the information is not easy to find; the only link you will likely find on their website is "Contact a Sales Representative". That is not a good relationship between a customer and a vendor, and it is not an example of transparency, especially when the software is sold as open. In such situations customers expect software prices to be publicly available, so they have the chance to decide, based on the software's features, whether the cost is fair. Conclusion Oracle WebLogic is the most mature, secure, reliable and scalable Java EE application server on the market, with a proven record of success around the globe to show for its maturity. Don't miss the chance to discover today how WebLogic could fit your needs and sustain your global IT middleware strategy, whether or not that strategy is based entirely on the Cloud.

    Read the article

  • How to deploy the advanced search page using Module in SharePoint 2013

    - by ybbest
Today, I’d like to show you how to deploy your custom advanced search page using a module in Visual Studio 2012. A module is how SharePoint deploys all the publishing pages to the search centre. Browse to the templates under the 15 hive of SharePoint 2013, then go to SearchCenterFiles under Features, and open Files.xml to see how SharePoint uses a module to deploy the advanced search page. You can download the solution here. Now I am going to show you how to deploy your custom advanced search page. The feature is located in C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\TEMPLATE\FEATURES\SearchCenterFiles. To deploy SharePoint advanced search pages, you need to do the following: 1. Create a SharePoint 2013 project and then create a module item. 2. Find how out-of-the-box SharePoint deploys the advanced search page in Files.xml, and copy and paste it into the elements.xml: <File Url="advanced.aspx" Type="GhostableInLibrary"> <Property Name="PublishingPageLayout" Value="~SiteCollection/_catalogs/masterpage/AdvancedSearchLayout.aspx, $Resources:Microsoft.Office.Server.Search,SearchCenterAdvancedSearchTitle;" /> <Property Name="Title" Value="$Resources:Microsoft.Office.Server.Search,Search_Advanced_Page_Title;" /> <Property Name="ContentType" Value="$Resources:Microsoft.Office.Server.Search,contenttype_welcomepage_name;" /> <AllUsersWebPart WebPartZoneID="MainZone" WebPartOrder="1"> <![CDATA[ <WebPart xmlns="http://schemas.microsoft.com/WebPart/v2"> <Assembly>Microsoft.Office.Server.Search, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c</Assembly> <TypeName>Microsoft.Office.Server.Search.WebControls.AdvancedSearchBox</TypeName> <Title>$Resources:Microsoft.Office.Server.Search,AdvancedSearch_Webpart_Title;</Title> <Description>$Resources:Microsoft.Office.Server.Search,AdvancedSearch_Webpart_Description;</Description> <FrameType>None</FrameType> <AllowMinimize>true</AllowMinimize> <AllowRemove>true</AllowRemove> <IsVisible>true</IsVisible> <SearchResultPageURL xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">results.aspx</SearchResultPageURL> <TextQuerySectionLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">$Resources:Microsoft.Office.Server.Search,AdvancedSearch_FindDocsWith_Title;</TextQuerySectionLabelText> <ShowAndQueryTextBox xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowAndQueryTextBox> <ShowPhraseQueryTextBox xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowPhraseQueryTextBox> <ShowOrQueryTextBox xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowOrQueryTextBox> <ShowNotQueryTextBox xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowNotQueryTextBox> <ScopeSectionLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">$Resources:Microsoft.Office.Server.Search,AdvancedSearch_NarrowSearch_Title;</ScopeSectionLabelText> <ShowLanguageOptions xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowLanguageOptions> <ShowResultTypePicker xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowResultTypePicker> <ShowPropertiesSection xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowPropertiesSection> <PropertiesSectionLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">$Resources:Microsoft.Office.Server.Search,AdvancedSearch_AddPropRestrictions_Title;</PropertiesSectionLabelText> </WebPart> ]]> </AllUsersWebPart> </File>
3. Customize your SharePoint advanced search page by modifying the Advanced Search Box web part. 4. Export the web part and copy the content of the exported web part file into the elements.xml in the module: <File Path="AdvancedSearchPage\advanced.aspx" Url="employeeAdvanced.aspx" Type="GhostableInLibrary"> <Property Name="PublishingPageLayout" Value="~SiteCollection/_catalogs/masterpage/AdvancedSearchLayout.aspx, $Resources:Microsoft.Office.Server.Search,SearchCenterAdvancedSearchTitle;" /> <Property Name="Title" Value="$Resources:Microsoft.Office.Server.Search,Search_Advanced_Page_Title;" /> <Property Name="ContentType" Value="$Resources:Microsoft.Office.Server.Search,contenttype_welcomepage_name;" /> <AllUsersWebPart WebPartZoneID="MainZone" WebPartOrder="1"> <![CDATA[ <WebPart xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/WebPart/v2"> <Title>Advanced Search Box</Title> <FrameType>None</FrameType> <Description>Displays parameterized search options based on properties and combinations of words.</Description> <IsIncluded>true</IsIncluded> <ZoneID>MainZone</ZoneID> <PartOrder>1</PartOrder> <FrameState>Normal</FrameState> <Height /> <Width /> <AllowRemove>true</AllowRemove> <AllowZoneChange>true</AllowZoneChange> <AllowMinimize>true</AllowMinimize> <AllowConnect>true</AllowConnect> <AllowEdit>true</AllowEdit> <AllowHide>true</AllowHide> <IsVisible>true</IsVisible> <DetailLink /> <HelpLink /> <HelpMode>Modeless</HelpMode> <Dir>Default</Dir> <PartImageSmall /> <MissingAssembly>Cannot import this Web Part.</MissingAssembly> <PartImageLarge /> <IsIncludedFilter /> <Assembly>Microsoft.Office.Server.Search, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c</Assembly> <TypeName>Microsoft.Office.Server.Search.WebControls.AdvancedSearchBox</TypeName> <SearchResultPageURL xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">results.aspx</SearchResultPageURL> <TextQuerySectionLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">Find documents that have...</TextQuerySectionLabelText> <ShowAndQueryTextBox xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowAndQueryTextBox> <AndQueryTextBoxLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox" /> <ShowPhraseQueryTextBox xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowPhraseQueryTextBox> <PhraseQueryTextBoxLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox" /> <ShowOrQueryTextBox xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowOrQueryTextBox> <OrQueryTextBoxLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox" /> <ShowNotQueryTextBox xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowNotQueryTextBox> <NotQueryTextBoxLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox" /> <ScopeSectionLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">Narrow the search...</ScopeSectionLabelText> <ShowScopes xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">false</ShowScopes> <ScopeLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox" /> <DisplayGroup xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">Advanced Search</DisplayGroup> <ShowLanguageOptions xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">false</ShowLanguageOptions> <LanguagesLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox" /> <ShowResultTypePicker xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowResultTypePicker>
<ResultTypeLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox" /> <ShowPropertiesSection xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowPropertiesSection> <PropertiesSectionLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">Add property restrictions...</PropertiesSectionLabelText> <Properties xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">&lt;root xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"&gt;  &lt;LangDefs&gt;    &lt;LangDef DisplayName="Arabic" LangID="ar"/&gt;    &lt;LangDef DisplayName="Bengali" LangID="bn"/&gt;    &lt;LangDef DisplayName="Bulgarian" LangID="bg"/&gt;    &lt;LangDef DisplayName="Catalan" LangID="ca"/&gt;    &lt;LangDef DisplayName="Simplified Chinese" LangID="zh-cn"/&gt;    &lt;LangDef DisplayName="Traditional Chinese" LangID="zh-tw"/&gt;    &lt;LangDef DisplayName="Croatian" LangID="hr"/&gt;    &lt;LangDef DisplayName="Czech" LangID="cs"/&gt;    &lt;LangDef DisplayName="Danish" LangID="da"/&gt;    &lt;LangDef DisplayName="Dutch" LangID="nl"/&gt;    &lt;LangDef DisplayName="English" LangID="en"/&gt;    &lt;LangDef DisplayName="Finnish" LangID="fi"/&gt;    &lt;LangDef DisplayName="French" LangID="fr"/&gt;    &lt;LangDef DisplayName="German" LangID="de"/&gt;    &lt;LangDef DisplayName="Greek" LangID="el"/&gt;    &lt;LangDef DisplayName="Gujarati" LangID="gu"/&gt;    &lt;LangDef DisplayName="Hebrew" LangID="he"/&gt;    &lt;LangDef DisplayName="Hindi" LangID="hi"/&gt;    &lt;LangDef DisplayName="Hungarian" LangID="hu"/&gt;    &lt;LangDef DisplayName="Icelandic" LangID="is"/&gt;    &lt;LangDef DisplayName="Indonesian" LangID="id"/&gt;    &lt;LangDef DisplayName="Italian" LangID="it"/&gt;    &lt;LangDef DisplayName="Japanese" LangID="ja"/&gt;    &lt;LangDef DisplayName="Kannada" LangID="kn"/&gt;    &lt;LangDef DisplayName="Korean" LangID="ko"/&gt;    &lt;LangDef DisplayName="Latvian" LangID="lv"/&gt;    &lt;LangDef DisplayName="Lithuanian" LangID="lt"/&gt;    &lt;LangDef DisplayName="Malay" LangID="ms"/&gt;    &lt;LangDef DisplayName="Malayalam" LangID="ml"/&gt;    &lt;LangDef DisplayName="Marathi" LangID="mr"/&gt;    &lt;LangDef DisplayName="Norwegian" LangID="no"/&gt;    &lt;LangDef DisplayName="Polish" LangID="pl"/&gt;    &lt;LangDef DisplayName="Portuguese" LangID="pt"/&gt;    &lt;LangDef DisplayName="Punjabi" LangID="pa"/&gt;    &lt;LangDef DisplayName="Romanian" LangID="ro"/&gt;    &lt;LangDef DisplayName="Russian" LangID="ru"/&gt;    &lt;LangDef DisplayName="Slovak" LangID="sk"/&gt;    &lt;LangDef DisplayName="Slovenian" LangID="sl"/&gt;    &lt;LangDef DisplayName="Spanish" LangID="es"/&gt;    &lt;LangDef DisplayName="Swedish" LangID="sv"/&gt;    &lt;LangDef DisplayName="Tamil" LangID="ta"/&gt;    &lt;LangDef DisplayName="Telugu" LangID="te"/&gt;    &lt;LangDef DisplayName="Thai" LangID="th"/&gt;    &lt;LangDef DisplayName="Turkish" LangID="tr"/&gt;    &lt;LangDef DisplayName="Ukrainian" LangID="uk"/&gt;    &lt;LangDef DisplayName="Urdu" LangID="ur"/&gt;    &lt;LangDef DisplayName="Vietnamese" LangID="vi"/&gt;  &lt;/LangDefs&gt;  &lt;Languages&gt;    &lt;Language LangRef="en"/&gt;    &lt;Language LangRef="fr"/&gt;    &lt;Language LangRef="de"/&gt;    &lt;Language LangRef="ja"/&gt;    &lt;Language LangRef="zh-cn"/&gt;    &lt;Language LangRef="es"/&gt;    &lt;Language LangRef="zh-tw"/&gt;  &lt;/Languages&gt;  &lt;PropertyDefs&gt;    &lt;PropertyDef Name="Path" DataType="url" DisplayName="URL"/&gt;    &lt;PropertyDef Name="Size" DataType="integer" DisplayName="Size (bytes)"/&gt;    &lt;PropertyDef Name="Write" 
DataType="datetime" DisplayName="Last Modified Date"/&gt;    &lt;PropertyDef Name="FileName" DataType="text" DisplayName="Name"/&gt;    &lt;PropertyDef Name="Description" DataType="text" DisplayName="Description"/&gt;    &lt;PropertyDef Name="Title" DataType="text" DisplayName="Title"/&gt;    &lt;PropertyDef Name="Author" DataType="text" DisplayName="Author"/&gt;    &lt;PropertyDef Name="DocSubject" DataType="text" DisplayName="Subject"/&gt;    &lt;PropertyDef Name="DocKeywords" DataType="text" DisplayName="Keywords"/&gt;    &lt;PropertyDef Name="DocComments" DataType="text" DisplayName="Comments"/&gt;    &lt;PropertyDef Name="CreatedBy" DataType="text" DisplayName="Created By"/&gt;    &lt;PropertyDef Name="ModifiedBy" DataType="text" DisplayName="Last Modified By"/&gt;    &lt;PropertyDef Name="EmployeeNumber" DataType="text" DisplayName="EmployeeNumber"/&gt;    &lt;PropertyDef Name="EmployeeId" DataType="text" DisplayName="EmployeeId"/&gt;    &lt;PropertyDef Name="EmployeeFirstName" DataType="text" DisplayName="EmployeeFirstName"/&gt;    &lt;PropertyDef Name="EmployeeLastName" DataType="text" DisplayName="EmployeeLastName"/&gt;  &lt;/PropertyDefs&gt;  &lt;ResultTypes&gt;    &lt;ResultType DisplayName="Employee Document" Name="default"&gt;      &lt;KeywordQuery/&gt;      &lt;PropertyRef Name="EmployeeNumber" /&gt;      &lt;PropertyRef Name="EmployeeId" /&gt;      &lt;PropertyRef Name="EmployeeFirstName" /&gt;      &lt;PropertyRef Name="EmployeeLastName" /&gt;    &lt;/ResultType&gt;    &lt;ResultType DisplayName="All Results"&gt;      &lt;KeywordQuery/&gt;      &lt;PropertyRef Name="Author" /&gt;      &lt;PropertyRef Name="Description" /&gt;      &lt;PropertyRef Name="FileName" /&gt;      &lt;PropertyRef Name="Size" /&gt;      &lt;PropertyRef Name="Path" /&gt;      &lt;PropertyRef Name="Write" /&gt;      &lt;PropertyRef Name="CreatedBy" /&gt;      &lt;PropertyRef Name="ModifiedBy" /&gt;    &lt;/ResultType&gt;    &lt;ResultType DisplayName="Documents" Name="documents"&gt;      &lt;KeywordQuery&gt;IsDocument="True"&lt;/KeywordQuery&gt;      &lt;PropertyRef Name="Author" /&gt;      &lt;PropertyRef Name="DocComments"/&gt;      &lt;PropertyRef Name="Description" /&gt;      &lt;PropertyRef Name="DocKeywords"/&gt;      &lt;PropertyRef Name="FileName" /&gt;      &lt;PropertyRef Name="Size" /&gt;      &lt;PropertyRef Name="DocSubject"/&gt;      &lt;PropertyRef Name="Path" /&gt;      &lt;PropertyRef Name="Write" /&gt;      &lt;PropertyRef Name="CreatedBy" /&gt;      &lt;PropertyRef Name="ModifiedBy" /&gt;      &lt;PropertyRef Name="Title"/&gt;    &lt;/ResultType&gt;    &lt;ResultType DisplayName="Word Documents" Name="worddocuments"&gt;      &lt;KeywordQuery&gt;FileExtension="doc" OR FileExtension="docx" OR FileExtension="dot" OR FileExtension="docm" OR FileExtension="odt"&lt;/KeywordQuery&gt;      &lt;PropertyRef Name="Author" /&gt;      &lt;PropertyRef Name="DocComments"/&gt;      &lt;PropertyRef Name="Description" /&gt;      &lt;PropertyRef Name="DocKeywords"/&gt;      &lt;PropertyRef Name="FileName" /&gt;      &lt;PropertyRef Name="Size" /&gt;      &lt;PropertyRef Name="DocSubject"/&gt;      &lt;PropertyRef Name="Path" /&gt;      &lt;PropertyRef Name="Write" /&gt;      &lt;PropertyRef Name="CreatedBy" /&gt;      &lt;PropertyRef Name="ModifiedBy" /&gt;      &lt;PropertyRef Name="Title"/&gt;    &lt;/ResultType&gt;    &lt;ResultType DisplayName="Excel Documents" Name="exceldocuments"&gt;      &lt;KeywordQuery&gt;FileExtension="xls" OR FileExtension="xlsx" OR FileExtension="xlsm" OR 
FileExtension="xlsb" OR FileExtension="ods"&lt;/KeywordQuery&gt;      &lt;PropertyRef Name="Author" /&gt;      &lt;PropertyRef Name="DocComments"/&gt;      &lt;PropertyRef Name="Description" /&gt;      &lt;PropertyRef Name="DocKeywords"/&gt;      &lt;PropertyRef Name="FileName" /&gt;      &lt;PropertyRef Name="Size" /&gt;      &lt;PropertyRef Name="DocSubject"/&gt;      &lt;PropertyRef Name="Path" /&gt;      &lt;PropertyRef Name="Write" /&gt;      &lt;PropertyRef Name="CreatedBy" /&gt;      &lt;PropertyRef Name="ModifiedBy" /&gt;      &lt;PropertyRef Name="Title"/&gt;    &lt;/ResultType&gt;    &lt;ResultType DisplayName="PowerPoint Presentations" Name="presentations"&gt;      &lt;KeywordQuery&gt;FileExtension="ppt" OR FileExtension="pptx" OR FileExtension="pptm" OR FileExtension="odp"&lt;/KeywordQuery&gt;      &lt;PropertyRef Name="Author" /&gt;      &lt;PropertyRef Name="DocComments"/&gt;      &lt;PropertyRef Name="Description" /&gt;      &lt;PropertyRef Name="DocKeywords"/&gt;      &lt;PropertyRef Name="FileName" /&gt;      &lt;PropertyRef Name="Size" /&gt;      &lt;PropertyRef Name="DocSubject"/&gt;      &lt;PropertyRef Name="Path" /&gt;      &lt;PropertyRef Name="Write" /&gt;      &lt;PropertyRef Name="CreatedBy" /&gt;      &lt;PropertyRef Name="ModifiedBy" /&gt;      &lt;PropertyRef Name="Title"/&gt;    &lt;/ResultType&gt;  &lt;/ResultTypes&gt;&lt;/root&gt;</Properties> </WebPart> ]]> </AllUsersWebPart> </File> 5.Deploy your custom solution and you will have a custom advanced search page.

    Read the article

  • Part 2&ndash;Load Testing In The Cloud

    - by Tarun Arora
Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge and the Test Rig Topology we want to get to. In Part 2, let’s start by understanding the components of Azure we’ll be making use of, followed by manually putting them together to create the test rig, so… let’s get down and dirty and start setting up the Test Rig. What Components of Azure will I be using for building the Test Rig in the Cloud? To run the Test Agents we’ll make use of Windows Azure Compute, and to enable communication between the Test Controller and the Test Agents we’ll make use of Windows Azure Connect. Azure Connect The Test Controller is on premise and the Test Agents are in the cloud (how will they talk?). To enable communication between the two, we’ll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec-protected connections between computers or virtual machines (VMs) in your organization’s network and roles running in Windows Azure. With this you can join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to an overview of Windows Azure Connect, and a very useful video explaining everything you wanted to know about Windows Azure Connect. Azure Compute Windows Azure Compute provides developers with a platform to host and manage applications in Microsoft’s data centres across the globe. A Windows Azure application is built from one or more components called ‘roles’. Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role; we’ll be using the Worker role to set up the Test Agents (there is a very nice blog post discussing the difference between the 3 role types). Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role, and can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server:

Virtual Machine Size  CPU Cores  Memory    Cost Per Hour
Extra Small           Shared     768 MB    $0.04
Small                 1          1.75 GB   $0.12
Medium                2          3.50 GB   $0.24
Large                 4          7.00 GB   $0.48
Extra Large           8          14.00 GB  $0.96

You might want to review the Windows Azure Pricing FAQ. Let’s Get Started building the Test Rig… The configuration:

Machine  Role                               Comments
VM – 1   Domain Controller for Playpit.com  On Premise
VM – 2   TFS, Test Controller               On Premise
VM – 3   Test Agent                         Cloud

In this blog post I assume that you already have the domain, Team Foundation Server and Test Controller installed and set up. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore. I. Let’s start building VM – 3: The Test Agent Download the Windows Azure SDK and Tools. Open Visual Studio and create a new Windows Azure Project using the Cloud Template. Choose the Worker Role for the reasons explained in the earlier post. The WorkerRole.cs implements the Run() and OnStart() methods; no code changes are required. You should be able to compile the project and run it in the compute emulator (installed as part of the Windows Azure Toolkit) on your local machine.
We will only be making changes to the Windows Azure project. Open ServiceDefinition.csdef and ensure that the vmsize is Small (remember the cost chart above). Import the "Connect" module; I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to ServiceConfiguration.Cloud.cscfg and note that settings with the key 'Microsoft.WindowsAzure.Plugins.Connect.%%%%' have been added to the configuration file; this is because you imported the Connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Let’s go step by step and understand all the highlighted parameters and where you can find the values for them. osFamily – By default this is set to 1 (Windows Server 2008 SP2); change it to 2 if you want the Windows Server 2008 R2 operating system. The advantage of using osFamily="2" is that you get PowerShell 2.0 rather than PowerShell 1.0: in PowerShell 2.0 you can simply run "powershell -ExecutionPolicy Unrestricted ./myscript.ps1" and it will work, while in PowerShell 1.0 you have to change a registry key by including the following in your command file before you can execute any PowerShell: "reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f". The other reason you might want to move to osFamily 2 is if you want IIS 7.5. Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM, both need to have the same token. Log on to the Windows Azure Management Portal, click on Connect, then click on Get Activation Token; this gives you the activation token. Copy it to the clipboard and paste it into the configuration file.
Note – Later in the blog I’ll show you how to install Connect on the on premise machine. EnableDomainJoin – Set the value to true; of course we want to join the Windows Azure worker role VM to the domain. DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I extracted it from the 'service manager' and 'Active Directory Users and Computers'. I also created a new domain OU named 'CloudInstances' so that all the cloud instances joined to my domain show up there; this is optional. You can encrypt the DomainPassword (refer to the instructions here), or hold fire: I’ll be covering that when I come to certificates and encryption in the coming section. Once you have filled in all this information, the configuration file should look something like this: <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Next we will enable the Remote Desktop module in the ServiceDefinition.csdef. We could make the changes manually or let a beautiful wizard help us; I prefer the second option, so right-click the Windows Azure project and choose Publish. Once you get the publish wizard, if you haven't already, you will be asked to import your Windows Azure subscription; this is simply the MSDN subscription activation key XML. Once you have done that, click Next to go to the Settings page and check 'Enable Remote Desktop for all roles'. As soon as you do, another pop-up asks for the details of the user you will be logging in with (make sure you enter a reasonable expiry date; you do not want the user account to expire today). Notice the 'more information' link at the bottom; click it to get access to the certificate section.
From the drop-down select the option to create a new certificate. In the pop-up window enter a friendly name for your certificate – in my case I entered ‘WAC – Test Rig’ – and click OK. This will create a new certificate for you. Click the View button to see the certificate details. Do you see the Thumbprint? That is the value that will go in the config file (very important). Now click the ‘Copy to File’ button to export the certificate; we will need to import it into the Windows Azure Management Portal later, so make sure you save it in a safe location.

Click Finish and enter the details of the user you would like to create with permissions for remote desktop access; once you have entered the details on the ‘Remote desktop configuration’ screen, click OK. On the Publish Windows Azure wizard screen press Cancel. Cancel because we don’t want to publish the role just yet, and Yes because we want to save all the changes in the config file.

Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder modules have been imported for you:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="WorkerRole1" vmsize="Small">
    <Imports>
      <Import moduleName="Diagnostics" />
      <Import moduleName="Connect" />
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>
  </WorkerRole>
</ServiceDefinition>

Now go to the ServiceConfiguration.Cloud.cscfg file and you will see a whole bunch of "Microsoft.WindowsAzure.Plugins.RemoteAccess.*" settings added for you:

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
  <Role name="WorkerRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4aeADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWSAKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccgvzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaMCp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
    </ConfigurationSettings>
    <Certificates>
      <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" />
    </Certificates>
  </Role>
</ServiceConfiguration>

Okay, let’s look at them one at a time:

Enabled – Yes, we would like to enable remote access.

AccountUsername – This is the user name you entered while you were on the publish Windows Azure role screen, as detailed above.

AccountEncryptedPassword – Try and decode that! The certificate is used to encrypt the password you specified for the user account (a conceptual sketch of that idea follows this list). Remember earlier I said either use the instructions or wait and I’ll be showing you encryption. Now, the user account I am using for RDP has the same password as my domain password, so I can simply copy the value of AccountEncryptedPassword to DomainPassword as well.

AccountExpiration – This is the expiration you specified in the wizard earlier; make sure your account does not expire today.

RemoteForwarder – Check out the documentation; below is how I understand it:
-- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable remote desktop connections to role instances.
-- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.

Certificate – Remember the certificate thumbprint from the wizard: the on-premise machine and the Windows Azure role machine that need to speak to each other must have the same thumbprint. More on that when we install the Windows Azure Connect endpoints on the on-premise machine.
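As an aside on what “the certificate is used to encrypt the password” means mechanically: the sketch below is a conceptual illustration only – it encrypts a string with the public key of an X.509 certificate. It is not the tooling’s actual envelope format, so its output is not interchangeable with the AccountEncryptedPassword value above; the file name ‘WAC-TestRig.cer’ and the class name are made-up stand-ins for the certificate you exported with ‘Copy to File’.

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import javax.crypto.Cipher;

public class CertEncryptSketch {
    public static void main(String[] args) throws Exception {
        // Load the public certificate exported earlier (hypothetical path).
        CertificateFactory factory = CertificateFactory.getInstance("X.509");
        FileInputStream in = new FileInputStream("WAC-TestRig.cer");
        X509Certificate cert;
        try {
            cert = (X509Certificate) factory.generateCertificate(in);
        } finally {
            in.close();
        }

        // Encrypt with the certificate's public key; only the holder of the
        // matching private key can recover the plain text.
        Cipher cipher = Cipher.getInstance("RSA");
        cipher.init(Cipher.ENCRYPT_MODE, cert.getPublicKey());
        byte[] encrypted = cipher.doFinal("MySecretPassword".getBytes("UTF-8"));

        // Base64-encode, since config files carry the blob as text.
        System.out.println(java.util.Base64.getEncoder().encodeToString(encrypted));
    }
}

The asymmetry is the whole point: the config file can safely carry the encrypted blob, because anyone can encrypt with the public certificate, but only the holder of the matching private key (in this setup, the deployed role) can decrypt it.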
As I said earlier, in this blog post I’ll be showing you the manual process, so I won’t be scripting any startup tasks to install the test agent or register it with the TFS Server. I’ll be showing you all this cool stuff in the next blog post; that’s because it’s important to understand the manual side of it – it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure worker role aka Test Agent VM, have it connect to the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server.

II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller

Glad you made it this far. We have set up the Azure Connect module in the Test Agent configuration; now, to enable communication between the on-premise TFS/Test Controller and the Azure-ed Test Agent, the Connect endpoints need to be enabled on the on-premise machines. Let’s have a look at how we can do this.

Log on to VM – 2, the machine running the TFS Server and Test Controller. Then log on to the Windows Azure Management Portal and click on Virtual Network. If you already have a subscription you should see the screen shot below; if not, you will be asked to complete the subscription first.

Click on ‘Install Local Endpoints’ at the top left of the panel and you get a URL with a token ID appended to it. Remember the token I showed you earlier? In theory, the token you get here should match the token you added to the Test Agent config file.

Copy the URL to the clipboard and paste it into Internet Explorer (important: at present the installation only works from IE, and you need to have cookies enabled in order to complete it). As stated in the pop-up, you can NOT download the software and run it later; you need to run it as-is, since it contains the token. Once the installation completes you should see the Windows Azure Connect icon in the system tray.

Right-click the Azure Connect icon, choose Diagnostics, and refer to this link for the diagnostic terminology.

NOTE – Unfortunately I could not see the Windows Azure Connect icon in the system tray. A bit of binging with Google revealed that the icon is only shown when the ‘Windows Azure Connect Endpoint’ service is started. So go to services.msc and make sure the service is started; if not, start it. Unfortunately again, the service did not start for me on a manual start, and I realised that one of its dependent services was disabled. Look at the service dependencies, start them, and then start Windows Azure Connect. Bottom line: you need to start the Windows Azure Connect service before you can proceed. Please refer here on MSDN for more on troubleshooting Windows Azure Connect. (Follow the next step as well.)

Now go back to the Windows Azure Management Portal and, from Groups and Roles, create a new group; let’s call it ‘Test Rig’. Make sure you add VM – 2 (the TFS Server VM where you just installed the endpoint).

Now if you go back to the Azure Connect icon in the system tray and click ‘Refresh Policy’, you will notice that the icon’s status changes from disconnected to ready for connection.

III. Importing the Certificate into the Windows Azure Management Portal

Before we publish, you need to import the certificate you created in Step I into the Windows Azure Management Portal. Log on to the portal and click on ‘Hosted Services, Storage Accounts & CDN’, then ‘Management Certificates’, followed by ‘Add Certificates’, as shown in the screen shot below.

Browse to the location where you saved the certificate earlier (refer to Step I in case you forgot). You should now be able to see the imported certificate here; make sure its thumbprint matches the one you inserted in the config files.

IV. Publish Windows Azure Worker Role aka Test Agent

Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio, right-click the Windows Azure project and select Publish. Verify the information in the wizard; from the advanced settings tab you can also enable the capture of IntelliTrace or profiling information. Click Next and click Publish!
From the View menu, select the Windows Azure Activity Log window. Now you should be able to see the deployment progress in real time. In the Windows Azure Management Portal you should also be able to see the progress of the creation of the new worker role.

Once the deployment is complete you should be able to RDP into the Test Agent worker role VM from the Playpit network using the domain admin user account (go to the run prompt, type mstsc and, in the pop-up, enter the machine name). In case you are unable to log in with the domain admin account, it means the process of joining the Test Agent to the domain has failed! But the good news is that because you imported the Connect module, you can connect to the Test Agent machine through the Windows Azure Management Portal and troubleshoot the failure: you will be able to log in with the user name and password you specified in the config file under the keys RemoteAccess.AccountUsername and RemoteAccess.AccountEncryptedPassword (except that here you enter the password unencrypted), then fix the issue or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move on to the next step.

So, log in to the Test Agent worker role VM with the Playpit domain administrator and verify that you can log in, that the machine is connected to the domain, and that the Connect service is running successfully. If yes, give yourself a pat on the back: you are 80% mission accomplished!

Go to the Windows Azure Management Portal, click on Virtual Network, click Groups and Roles, select the ‘Test Rig’ group you created earlier and click Edit Group. In the ‘Connect to’ section, click Add to select the worker role you have just deployed. Also check ‘Allow connections between endpoints in the group’; this enables communication between the test controller and the test agents, and between the test agents themselves. Click Save.

Now you are ready to deploy the Test Agent software on the worker role Test Agent VM and configure it to work with the Test Controller.

V. Configuring VM – 3: Installing Test Agent and Associating Test Agent to Controller

Log in to the worker role Test Agent VM that you have just successfully deployed; make sure you log in with the domain administrator account. Download the ‘All Agents’ software from MSDN (‘en_visual_studio_agents_2010_x86_x64_dvd_509679.iso’), extract the ISO and navigate to where you extracted it. In my case I extracted it to “C:\Resources\Temp\VsAgentSetup”. Open the Test Agent folder and double-click setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN.

Once you have successfully installed the Test Agent software you will need to configure it. Right-click the test agent configuration tool and run it as a different user, i.e. an administrator. This is really to run the configuration wizard with elevated privileges (UAC might block some actions otherwise).

In the run options you can select ‘service’; you do not need to run the agent as interactive unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life I would never do that; I would create a separate test user service account for this purpose.
But for the blog post we are using the most powerful user, so that no policies or restrictions block you.

Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding. In my experience, you are most likely to run into either a permissions issue or a firewall blocking communication.

And now the moment of truth! Go to VM – 2, open Visual Studio and, from the Test menu, select Manage Test Controller.

Mission accomplished! You should be able to see the Test Agent that you have just configured here.

VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig

I have various blog posts on performance testing with Visual Studio Ultimate; you can follow the links and videos below.

Blog posts:
- Part 1 – Performance Testing using Visual Studio 2010 Ultimate
- Part 2 – Performance Testing using Visual Studio 2010 Ultimate
- Part 3 – Performance Testing using Visual Studio 2010 Ultimate

Videos:
- Test Tools Configuration & Settings in Visual Studio
- Why & How to Record Web Performance Tests in Visual Studio Ultimate
- Goal Driven Load Testing using Visual Studio Ultimate

Now that you have created your load tests, there is one last change you need to make before you can run them on your Azure test rig: create a new test settings file, change the test execution method to ‘Remote Execution’ and select the test controller you configured the worker role Test Agent against – in our case VM – 2. So go on, fire off a test run and see the results of the test being executed on the Azure-ed test rig.

Review and What’s next?

A quick recap of the benefits of running the test rig in the cloud, and what I will be covering in the next blog post – and I would love to hear your feedback!

Advantages
- Utilizing the power of Azure compute to run a heavy virtual user load.
- Benefiting from Azure’s flexibility: destroy Test Agents when not in use; it takes < 25 minutes to spin up a new one.
- Most importantly, testing network latency. Network latency and connection speed are two different things, and latency is usually very hard to test. By placing Test Agents in Microsoft data centres around the globe, you can measure the lag in transferring the bytes caused not by a slow connection but by the page being requested from the other side of the globe.

Next steps
The process of spinning up the Test Agents in Windows Azure is not yet 100% automated. I am working on the worker process and PowerShell scripts to automate the role deployment, the unattended install of the test agent software and the registration of the test agent with the test controller. In the next blog post I will show you how to make the complete process unattended and automated.

Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post; I would love to hear your feedback! If you have any recommendations on things I should consider, or any questions or feedback, feel free to leave a comment. See you in Part III.


  • BlackBerry Player, custom data source

    - by Alex
    Hello, I need to create a custom media player within the application, with support for MP3 and WAV files. I read in the documentation that I can't seek or get the media file duration without a custom data source. I checked the demo in JDE 4.6 but I still have problems: I can't get the duration – it returns much more than expected – so I'm sure I broke something while modifying the code to read the MP3 file locally from the filesystem. Can somebody help me see what I did wrong? (I can hear the MP3, so the player plays it correctly from start to end.) I must support OS >= 4.6. Thank you. Here is my modified data source, LimitedRateStreamingSource.java:

/*
 * LimitedRateStreamingSource.java
 *
 * Copyright © 1998-2009 Research In Motion Ltd.
 *
 * Note: For the sake of simplicity, this sample application may not leverage
 * resource bundles and resource strings. However, it is STRONGLY recommended
 * that application developers make use of the localization features available
 * within the BlackBerry development platform to ensure a seamless application
 * experience across a variety of languages and geographies. For more
 * information on localizing your application, please refer to the BlackBerry
 * Java Development Environment Development Guide associated with this release.
 */
package com.halcyon.tawkwidget.model;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import javax.microedition.io.Connector;
import javax.microedition.io.file.FileConnection;
import javax.microedition.media.Control;
import javax.microedition.media.protocol.ContentDescriptor;
import javax.microedition.media.protocol.DataSource;
import javax.microedition.media.protocol.SourceStream;

import net.rim.device.api.io.SharedInputStream;

/**
 * The data source used by the BufferedPlayback's media player.
 */
public final class LimitedRateStreamingSource extends DataSource {

    /** The max size to be read from the stream at one time. */
    private static final int READ_CHUNK = 512; // bytes

    /** The minimum number of bytes that must be buffered before the media file will begin playing. */
    private int _startBuffer = 200000;

    /** The maximum size (in bytes) of a single read. */
    private int _readLimit = 32000;

    /**
     * The minimum forward byte buffer which must be maintained in order for
     * playback to keep going. If the forward buffer falls below this number,
     * the playback will pause until the buffer increases.
     */
    private int _pauseBytes = 64000;

    /** The minimum forward byte buffer required to resume playback after a pause. */
    private int _resumeBytes = 128000;

    /** The connection from which the media content is read. */
    private FileConnection _fileConnection;

    /** An input stream shared between several readers. */
    private SharedInputStream _readAhead;

    /** A stream to the buffered resource. */
    private LimitedRateSourceStream _feedToPlayer;

    /** The MIME type of the media file. */
    private String _forcedContentType;

    /** A counter for the total number of buffered bytes. */
    private volatile int _totalRead;

    /** A flag used to tell the connection thread to stop. */
    private volatile boolean _stop;

    /**
     * A flag used to indicate that the initial buffering is complete, i.e.
     * that the current buffer is larger than the defined start buffer size.
     */
    private volatile boolean _bufferingComplete;

    /** A flag used to indicate that the file download is complete. */
    private volatile boolean _downloadComplete;

    /** The thread which retrieves the media file. */
    private ConnectionThread _loaderThread;

    /** The local save file into which the source file is written. */
    private FileConnection _saveFile;

    /** A stream for the local save file. */
    private OutputStream _saveStream;

    /**
     * Constructor.
     * @param locator The locator that describes the DataSource.
     */
    public LimitedRateStreamingSource(String locator) {
        super(locator);
    }

    /**
     * Open a connection to the locator.
     */
    public void connect() throws IOException {
        // Open the connection to the source file.
        _fileConnection = (FileConnection) Connector.open(getLocator(), Connector.READ);

        // Cache a reference to the locator.
        String locator = getLocator();

        // Report status.
        System.out.println("Loading: " + locator);
        System.out.println("Size: " + _fileConnection.totalSize());

        // The name of the file begins after the last forward slash.
        int filenameStart = locator.lastIndexOf('/');

        // The file name ends at the first instance of a semicolon;
        // if there is no semicolon, it ends at the end of the line.
        int paramStart = locator.indexOf(';');
        if (paramStart < 0) {
            paramStart = locator.length();
        }

        // Extract the file name.
        String filename = locator.substring(filenameStart, paramStart);
        System.out.println("Filename: " + filename);

        // Open a local save file with the same name as the source file.
        _saveFile = (FileConnection) Connector.open(
                "file:///SDCard/blackberry/music" + filename, Connector.READ_WRITE);

        // If the file doesn't already exist, create it.
        if (!_saveFile.exists()) {
            _saveFile.create();
        }
        _saveFile.setReadable(true);

        // Open a shared input stream to the local save file to allow many
        // simultaneous readers, and begin reading at the start of the file.
        SharedInputStream fileStream =
                SharedInputStream.getSharedInputStream(_saveFile.openInputStream());
        fileStream.setCurrentPosition(0);

        if (_saveFile.fileSize() < _fileConnection.totalSize()) {
            // The local copy is smaller than the source file: we did not get
            // the entire file, so set the system up to try again.
            _saveFile.setWritable(true);

            // A non-null save stream is used as a flag later to indicate that
            // the file download was incomplete.
            _saveStream = _saveFile.openOutputStream();

            // Use a new shared input stream for buffered reading.
            _readAhead = SharedInputStream.getSharedInputStream(_fileConnection.openInputStream());
        } else {
            // The download is complete. We can use the initial input stream
            // to read the buffered media and close the source connection.
            _downloadComplete = true;
            _readAhead = fileStream;
            _fileConnection.close();
        }

        if (_forcedContentType != null) {
            // Use the user-defined content type if it is set.
            _feedToPlayer = new LimitedRateSourceStream(_readAhead, _forcedContentType);
        } else {
            // Otherwise we would use the MIME type of the source file, but a
            // FileConnection does not report one, so setContentType() must be
            // called before connect().
            // _feedToPlayer = new LimitedRateSourceStream(_readAhead, _fileConnection);
        }
    }

    /**
     * Destroy and close all existing connections.
     */
    public void disconnect() {
        try {
            if (_saveStream != null) {
                // Destroy the stream to the local save file.
                _saveStream.close();
                _saveStream = null;
            }

            // Close the local save file.
            _saveFile.close();

            if (_readAhead != null) {
                // Close the reader stream.
                _readAhead.close();
                _readAhead = null;
            }

            // Close the source file connection.
            _fileConnection.close();

            // Close the stream to the player.
            _feedToPlayer.close();
        } catch (Exception e) {
            System.err.println(e.getMessage());
        }
    }

    /** Returns the content type of the media file. */
    public String getContentType() {
        return _feedToPlayer.getContentDescriptor().getContentType();
    }

    /** Returns a stream to the buffered resource. */
    public SourceStream[] getStreams() {
        return new SourceStream[] { _feedToPlayer };
    }

    /** Starts the connection thread used to download the file. */
    public void start() throws IOException {
        // If the save stream is null, we have already completely downloaded
        // the file.
        if (_saveStream != null) {
            // Open the connection thread to finish downloading the file.
            _loaderThread = new ConnectionThread();
            _loaderThread.start();
        }
    }

    /** Stops the connection thread. */
    public void stop() throws IOException {
        // Set the boolean flag to stop the thread.
        _stop = true;
    }

    /** @see javax.microedition.media.Controllable#getControl(String) */
    public Control getControl(String controlType) {
        // No implemented controls.
        return null;
    }

    /** @see javax.microedition.media.Controllable#getControls() */
    public Control[] getControls() {
        // No implemented controls.
        return null;
    }

    /**
     * Force the lower level stream to a given content type. Must be called
     * before the connect function in order to work.
     * @param contentType The content type to use.
     */
    public void setContentType(String contentType) {
        _forcedContentType = contentType;
    }

    /**
     * A stream to the buffered media resource.
     */
    private final class LimitedRateSourceStream implements SourceStream {

        /** A stream to the local copy of the resource. */
        private SharedInputStream _baseSharedStream;

        /** Describes the content type of the media file. */
        private ContentDescriptor _contentDescriptor;

        /**
         * Constructor. Creates a LimitedRateSourceStream from the given InputStream.
         * @param inputStream The input stream used to create a new reader.
         * @param contentType The content type of the file.
         */
        LimitedRateSourceStream(InputStream inputStream, String contentType) {
            _baseSharedStream = SharedInputStream.getSharedInputStream(inputStream);
            _contentDescriptor = new ContentDescriptor(contentType);
        }

        /** Returns the content descriptor for this stream. */
        public ContentDescriptor getContentDescriptor() {
            return _contentDescriptor;
        }

        /** Returns the length provided by the connection. */
        public long getContentLength() {
            return _fileConnection.totalSize();
        }

        /** Returns the seek type of the stream. */
        public int getSeekType() {
            return RANDOM_ACCESSIBLE;
            // return SEEKABLE_TO_START;
        }

        /** Returns the maximum size (in bytes) of a single read. */
        public int getTransferSize() {
            return _readLimit;
        }

        /**
         * Writes bytes from the buffer into a byte array for playback.
         * @param bytes The buffer into which the data is read.
         * @param off The start offset in the array at which the data is written.
         * @param len The maximum number of bytes to read.
         * @return The total number of bytes read into the buffer, or -1 if
         *         there is no more data because the end of the stream has
         *         been reached.
         */
        public int read(byte[] bytes, int off, int len) throws IOException {
            System.out.println("Read request for: " + len + " bytes");

            // Limit bytes read to our read limit.
            int readLength = len;
            if (readLength > getReadLimit()) {
                readLength = getReadLimit();
            }

            // The number of available bytes in the buffer.
            int available;

            // A boolean flag indicating that the thread should pause until
            // the buffer has increased sufficiently.
            boolean paused = false;

            for (;;) {
                available = _baseSharedStream.available();

                if (_downloadComplete) {
                    // Ignore all restrictions if downloading is complete.
                    System.out.println("Complete, reading: " + len + " - available: " + available);
                    return _baseSharedStream.read(bytes, off, len);
                } else if (_bufferingComplete) {
                    if (paused && available > getResumeBytes()) {
                        // Playback was paused due to buffering, but the number
                        // of available bytes is sufficiently high again, so
                        // resume playback of the media.
                        System.out.println("Resuming - available: " + available);
                        paused = false;
                        return _baseSharedStream.read(bytes, off, readLength);
                    } else if (!paused && (available > getPauseBytes() || available > readLength)) {
                        // We have enough data for this read.
                        if (available < getPauseBytes()) {
                            // If the buffer is now insufficient, set the pause flag.
                            paused = true;
                        }
                        System.out.println("Reading: " + readLength + " - available: " + available);
                        return _baseSharedStream.read(bytes, off, readLength);
                    } else if (!paused) {
                        // Pause until we have buffered enough to resume.
                        paused = true;
                    }
                } else {
                    // We are not ready to start yet; sleep to allow the buffer
                    // to increase.
                    try {
                        Thread.sleep(500);
                    } catch (Exception e) {
                        System.err.println(e.getMessage());
                    }
                }
            }
        }

        /** @see javax.microedition.media.protocol.SourceStream#seek(long) */
        public long seek(long where) throws IOException {
            _baseSharedStream.setCurrentPosition((int) where);
            return _baseSharedStream.getCurrentPosition();
        }

        /** @see javax.microedition.media.protocol.SourceStream#tell() */
        public long tell() {
            return _baseSharedStream.getCurrentPosition();
        }

        /** Closes the stream. */
        void close() throws IOException {
            _baseSharedStream.close();
        }

        /** @see javax.microedition.media.Controllable#getControl(String) */
        public Control getControl(String controlType) {
            // No implemented controls.
            return null;
        }

        /** @see javax.microedition.media.Controllable#getControls() */
        public Control[] getControls() {
            // No implemented controls.
            return null;
        }
    }

    /**
     * A thread which downloads the source file and writes it to the local file.
     */
    private final class ConnectionThread extends Thread {
        public void run() {
            try {
                byte[] data = new byte[READ_CHUNK];
                int len = 0;

                // Read until we reach the end of the file.
                while (-1 != (len = _readAhead.read(data))) {
                    _totalRead += len;

                    if (!_bufferingComplete && _totalRead > getStartBuffer()) {
                        // We have enough of a buffer to begin playback.
                        _bufferingComplete = true;
                        System.out.println("Initial buffering complete");
                    }

                    if (_stop) {
                        // Stop reading.
                        return;
                    }
                }

                System.out.println("Downloading complete");
                System.out.println("Total read: " + _totalRead);

                // If the downloaded data is not the same size as the source
                // file, something is wrong.
                if (_totalRead != _fileConnection.totalSize()) {
                    System.err.println("* Unable to download entire file *");
                }

                _downloadComplete = true;
                _readAhead.setCurrentPosition(0);

                // Write the downloaded data to the local file. Note: write
                // only the 'len' bytes actually read in each pass; writing the
                // whole buffer would append stale bytes from the previous
                // iteration.
                while (-1 != (len = _readAhead.read(data))) {
                    _saveStream.write(data, 0, len);
                }
            } catch (Exception e) {
                System.err.println(e.toString());
            }
        }
    }

    /** Gets the minimum forward byte buffer which must be maintained for playback to continue. */
    int getPauseBytes() {
        return _pauseBytes;
    }

    /** Sets the minimum forward byte buffer which must be maintained for playback to continue. */
    void setPauseBytes(int pauseBytes) {
        _pauseBytes = pauseBytes;
    }

    /** Gets the maximum size (in bytes) of a single read. */
    int getReadLimit() {
        return _readLimit;
    }

    /** Sets the maximum size (in bytes) of a single read. */
    void setReadLimit(int readLimit) {
        _readLimit = readLimit;
    }

    /** Gets the minimum forward byte buffer required to resume playback after a pause. */
    int getResumeBytes() {
        return _resumeBytes;
    }

    /** Sets the minimum forward byte buffer required to resume playback after a pause. */
    void setResumeBytes(int resumeBytes) {
        _resumeBytes = resumeBytes;
    }

    /** Gets the minimum number of bytes that must be buffered before the media file will begin playing. */
    int getStartBuffer() {
        return _startBuffer;
    }

    /** Sets the minimum number of bytes that must be buffered before the media file will begin playing. */
    void setStartBuffer(int startBuffer) {
        _startBuffer = startBuffer;
    }
}

And this is how I use it:

LimitedRateStreamingSource source =
        new LimitedRateStreamingSource("file:///SDCard/music3.mp3");
source.setContentType("audio/mpeg");
mediaPlayer = javax.microedition.media.Manager.createPlayer(source);
mediaPlayer.addPlayerListener(this);
mediaPlayer.realize();
mediaPlayer.prefetch();

After starting, mediaPlayer.getDuration() returns around 24:22, while the built-in BlackBerry media player says the file length is 4:05. I also tried to get the duration in the listener, and there it unfortunately returned around 64 minutes, so I'm sure something is not right inside the data source...
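For reference, Player.getDuration() in MMAPI reports microseconds and may return Player.TIME_UNKNOWN (-1) until the player has worked the duration out, at which point a registered PlayerListener receives a DURATION_UPDATED event. Below is a minimal logging sketch to separate a unit-conversion slip from a genuinely wrong estimate; it assumes only standard JSR-135 APIs, and the DurationLogger class name is made up:

import javax.microedition.media.Player;
import javax.microedition.media.PlayerListener;

public class DurationLogger implements PlayerListener {
    public void playerUpdate(Player player, String event, Object eventData) {
        // DURATION_UPDATED fires when the player revises its estimate.
        if (PlayerListener.DURATION_UPDATED.equals(event)
                || PlayerListener.STARTED.equals(event)) {
            logDuration(player.getDuration());
        }
    }

    private void logDuration(long micros) {
        if (micros == Player.TIME_UNKNOWN) {
            System.out.println("Duration not known yet");
            return;
        }
        // getDuration() is in microseconds, not milliseconds.
        long totalSeconds = micros / 1000000L;
        System.out.println("Duration: " + (totalSeconds / 60) + ":" + (totalSeconds % 60));
    }
}

If the value logged at DURATION_UPDATED is already wrong, the estimate itself is off rather than the conversion; for MP3, players typically estimate duration from the reported content length and the bitrate, so a getContentLength() that does not match the real audio size would inflate the figure.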

