Search Results

Search found 28222 results on 1129 pages for 'machine config'.


  • "C variable type sizes are machine dependent." Is it really true? (signed & unsigned numbers)

    - by claws
    Hello, I've been told that C types are machine dependent. Today I wanted to verify it. void legacyTypes() { /* character types */ char k_char = 'a'; //Signedness --> signed & unsigned signed char k_char_s = 'a'; unsigned char k_char_u = 'a'; /* integer types */ int k_int = 1; /* Same as "signed int" */ //Signedness --> signed & unsigned signed int k_int_s = -2; unsigned int k_int_u = 3; //Size --> short, _____, long, long long short int k_s_int = 4; long int k_l_int = 5; long long int k_ll_int = 6; /* real number types */ float k_float = 7; double k_double = 8; } I compiled it on a 32-Bit machine using minGW C compiler _legacyTypes: pushl %ebp movl %esp, %ebp subl $48, %esp movb $97, -1(%ebp) # char movb $97, -2(%ebp) # signed char movb $97, -3(%ebp) # unsigned char movl $1, -8(%ebp) # int movl $-2, -12(%ebp)# signed int movl $3, -16(%ebp) # unsigned int movw $4, -18(%ebp) # short int movl $5, -24(%ebp) # long int movl $6, -32(%ebp) # long long int movl $0, -28(%ebp) movl $0x40e00000, %eax movl %eax, -36(%ebp) fldl LC2 fstpl -48(%ebp) leave ret I compiled the same code on 64-Bit processor (Intel Core 2 Duo) on GCC (linux) legacyTypes: .LFB2: .cfi_startproc pushq %rbp .cfi_def_cfa_offset 16 movq %rsp, %rbp .cfi_offset 6, -16 .cfi_def_cfa_register 6 movb $97, -1(%rbp) # char movb $97, -2(%rbp) # signed char movb $97, -3(%rbp) # unsigned char movl $1, -12(%rbp) # int movl $-2, -16(%rbp)# signed int movl $3, -20(%rbp) # unsigned int movw $4, -6(%rbp) # short int movq $5, -32(%rbp) # long int movq $6, -40(%rbp) # long long int movl $0x40e00000, %eax movl %eax, -24(%rbp) movabsq $4620693217682128896, %rax movq %rax, -48(%rbp) leave ret Observations char, signed char, unsigned char, int, unsigned int, signed int, short int, unsigned short int, signed short int all occupy same no. of bytes on both 32-Bit & 64-Bit Processor. The only change is in long int & long long int both of these occupy 32-bit on 32-bit machine & 64-bit on 64-bit machine. And also the pointers, which take 32-bit on 32-bit CPU & 64-bit on 64-bit CPU. Questions: I cannot say, what the books say is wrong. But I'm missing something here. What exactly does "Variable types are machine dependent mean?" As you can see, There is no difference between instructions for unsigned & signed numbers. Then how come the range of numbers that can be addressed using both is different? I was reading http://stackoverflow.com/questions/2511246/how-to-maintain-fixed-size-of-c-variable-types-over-different-machines I didn't get the purpose of the question or their answers. What maintaining fixed size? They all are the same. I didn't understand how those answers are going to ensure the same size.
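    The C standard only pins down minimum ranges (char at least 8 bits, short and int at least 16, long at least 32, long long at least 64); the actual widths are chosen by the compiler/ABI, which is what "machine dependent" means. The signed/unsigned distinction shows up in which comparison, division and widening instructions get emitted, not in a plain store of a small constant, so identical-looking movl instructions are expected here. A minimal sketch you could compile on both machines to compare sizes, and to see the <stdint.h> fixed-width types the linked question is about (casts to unsigned long only so the format string works with older MinGW runtimes):

        /* sizes.c - print how much storage each type gets on this machine */
        #include <stdio.h>
        #include <stdint.h>   /* int32_t, int64_t: exact widths wherever they are provided */

        int main(void)
        {
            printf("short     : %lu bytes\n", (unsigned long)sizeof(short));
            printf("int       : %lu bytes\n", (unsigned long)sizeof(int));
            printf("long      : %lu bytes\n", (unsigned long)sizeof(long));
            printf("long long : %lu bytes\n", (unsigned long)sizeof(long long));
            printf("void *    : %lu bytes\n", (unsigned long)sizeof(void *));
            printf("int32_t   : %lu bytes\n", (unsigned long)sizeof(int32_t));  /* exactly 32 bits */
            printf("int64_t   : %lu bytes\n", (unsigned long)sizeof(int64_t));  /* exactly 64 bits */
            return 0;
        }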


  • TS-7800 Hangs on bootup

    - by Reid
    I have a TS-7800, and it typically boots from the SD card inserted in it. When I tried to boot it up today, it hung on the syslog line. I am now having "Read only file system" problems. What has gone wrong? Bootup console: >> Copyright (c) 2008, Technologic Systems >> Booting from SD card... . . . . >> Booting to SD Card... INIT: version 2.86 booting Starting the hotplug events dispatcher: udevd. Synthesizing the initial hotplug events...done. Waiting for /dev to be fully populated...done. mount: can't find / in /etc/fstab or /etc/mtab Cleaning up ifupdown...rm: cannot remove `/etc/network/run/ifstate': Read-only file system Loading kernel modules...done. Checking all file systems... fsck 1.37 (21-Mar-2005) ... done. none on /dev/pts type devpts (rw,gid=5,mode=620) /etc/init.d/rcS: line 39: /tmp/.clean: Read-only file system Setting up networking...done. Setting up IP spoofing protection: rp_filter. Enabling packet forwarding...done. Configuring network interfaces...ifup: failed to open statefile /etc/network/run/ifstate: Read-only file system done. Starting portmap daemon: portmap. /etc/init.d/rcS: line 39: /tmp/.clean: Read-only file system /etc/init.d/rcS: line 24: /var/run/utmp: Read-only file system rm: cannot remove `/var/lib/urandom/random-seed': Read-only file system urandom start: failed. Recovering nvi editor sessions... done. INIT: Entering runlevel: 3 Starting system log daemon: syslogd . Starting kernel log daemon: klogd. Starting MTA: open: Read-only file system touch: cannot touch `/var/lib/exim4/config.autogenerated.tmp': Read-only file system chown: cannot access `/var/lib/exim4/config.autogenerated.tmp': No such file or directory chmod: cannot access `/var/lib/exim4/config.autogenerated.tmp': No such file or directory chmod: changing permissions of `/var/lib/exim4/config.autogenerated': Read-only file system /usr/sbin/update-exim4.conf: line 260: cannot create temp file for here document: Read-only file system /usr/sbin/update-exim4.conf: line 387: /var/lib/exim4/config.autogenerated.tmp: Read-only file system 2002-01-01 01:31:36 Cannot open main log file "/var/log/exim4/mainlog": Read-only file system: euid=0 egid=0 2002-01-01 01:31:36 non-existent configuration file(s): /var/lib/exim4/config.autogenerated.tmp 2002-01-01 01:31:36 Cannot open main log file "/var/log/exim4/mainlog": Read-only file system: euid=0 egid=0 exim: could not open panic log - aborting: see message(s) above Invalid new configfile /var/lib/exim4/config.autogenerated.tmp not installing /var/lib/exim4/config.autogenerated.tmp to /var/lib/exim4/config.autogenerated Starting internet superserver: inetd. Starting OpenBSD Secure Shell server: sshd. Starting NFS common utilities: statdStarting periodic command scheduler: cron/usr/sbin/cron: can't open or create /var/run/crond.pid: Read-only file system . Starting web server (apache2)...(30)Read-only file system: apache2: could not open error log file /var/log/apache2/error.log. Unable to open logs failed! Debian GNU/Linux 3.1 ts7800 ttyS0 ts7800 login:


  • LINQ to SQL compiled query is returning results from a different DataContext

    - by Vladimir Kojic
    Compiled query:

        public static Func<OperationalDataContext, short, Machine> QueryMachineById =
            CompiledQuery.Compile((OperationalDataContext db, short machineID) =>
                db.Machines.Where(m => m.MachineID == machineID).SingleOrDefault());

    It looks like the compiled query is caching the Machine object and returning the same object even when the query is called from a new DataContext (I'm disposing of the DataContext in the service, but I'm getting the Machine from the previous DataContext). I use POCOs and XML mapping.

    Revised: it looks like the compiled query is returning a result from the new data context and is not using the one I passed into the compiled query. Therefore I cannot reuse the returned object and link it to another object obtained from the DataContext through non-compiled queries. I'm using the unit of work pattern:

        // First call
        using (new DataContext) { Machine from DataContext.Table == machine from cached query }
        // Do some work
        // Second call is failing
        using (new DataContext) { Machine from DataContext.Table <> machine from cached query }


  • Forward a call to a webservice to another webservice?

    - by Luhmann
    I have an HTTPS web service behind a firewall on a machine (A) that I cannot access directly, but I do have access to a machine on the same network (B) from which I can call the web service on machine A. What is the best way of talking to the web service on machine A from the outside, via machine B (which I reach over VPN)? I can obviously create a service with a matching interface on machine B, call the methods on the web service on machine A, and return the result, but I worry about the overhead. Is there another way? Can I somehow forward the request?
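    One low-overhead option, if plain TCP forwarding is acceptable (the client still negotiates TLS end-to-end with machine A, so it will see A's certificate), is to relay the port on machine B instead of wrapping the service. Hedged sketches only - the hostnames and port 443 are placeholders - for a Windows machine B (netsh portproxy) and for tunnelling through B over SSH:

        rem On machine B (Windows), relay TCP 443 to machine A:
        netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=443 connectaddress=machineA connectport=443

        # Or tunnel through machine B over SSH (if B runs sshd), then call https://localhost:8443/...
        ssh -L 8443:machineA:443 user@machineB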


  • HELP: MS Virtual Disk Service to Access Volumes and Discs on Local Machine.

    - by Jibran Ahmed
    Hi, here it is my code through which I am successfully initialize the VDS service and get the Packs but When I call QueryVolumes on IVdsPack Object, I am able to get IEnumVdsObjects but unable to get IUnknown* array through IEnumVdsObject::Next method, it reutrns S_FALSE with IUnkown* = NULL. So this IUnknown* cant be used to QueryInterface for IVdsVolume Below is my code HRESULT hResult; IVdsService* pService = NULL; IVdsServiceLoader *pLoader = NULL; //Launch the VDS Service hResult = CoInitialize(NULL); if( SUCCEEDED(hResult) ) { hResult = CoCreateInstance( CLSID_VdsLoader, NULL, CLSCTX_LOCAL_SERVER, IID_IVdsServiceLoader, (void**) &pLoader ); //if succeeded load VDS on local machine if( SUCCEEDED(hResult) ) pLoader->LoadService(NULL, &pService); //Done with Loader now release VDS Loader interface _SafeRelease(pLoader); if( SUCCEEDED(hResult) ) { hResult = pService->WaitForServiceReady(); if ( SUCCEEDED(hResult) ) { AfxMessageBox(L"VDS Service Loaded"); IEnumVdsObject* pEnumVdsObject = NULL; hResult = pService->QueryProviders(VDS_QUERY_SOFTWARE_PROVIDERS, &pEnumVdsObject); IUnknown* ppObjUnk ; IVdsSwProvider* pVdsSwProvider = NULL; IVdsPack* pVdsPack = NULL; IVdsVolume* pVdsVolume = NULL; ULONG ulFetched = 0; hResult = E_INVALIDARG; while(!SUCCEEDED(hResult)) { hResult = pEnumVdsObject->Next(1, &ppObjUnk, &ulFetched); hResult = ppObjUnk->QueryInterface(IID_IVdsSwProvider, (void**)&pVdsSwProvider); if(!SUCCEEDED(hResult)) _SafeRelease(ppObjUnk); } _SafeRelease(pEnumVdsObject); _SafeRelease(ppObjUnk); hResult = pVdsSwProvider->QueryPacks(&pEnumVdsObject); hResult = E_INVALIDARG; while(!SUCCEEDED(hResult)) { hResult = pEnumVdsObject->Next(1, &ppObjUnk, &ulFetched); hResult = ppObjUnk->QueryInterface(IID_IVdsPack, (void**)&pVdsPack); if(!SUCCEEDED(hResult)) _SafeRelease(ppObjUnk); } _SafeRelease(pEnumVdsObject); _SafeRelease(ppObjUnk); hResult = pVdsPack->QueryVolumes(&pEnumVdsObject); pEnumVdsObject->Reset(); hResult = E_INVALIDARG; ulFetched = 0; BOOL bDone = FALSE; while(!SUCCEEDED(hResult)) { hResult = pEnumVdsObject->Next(1, &ppObjUnk, &ulFetched); //hResult = ppObjUnk->QueryInterface(IID_IVdsVolume, (void**)&pVdsVolume); if(!SUCCEEDED(hResult)) _SafeRelease(ppObjUnk); } _SafeRelease(pEnumVdsObject); _SafeRelease(ppObjUnk); _SafeRelease(pVdsPack); _SafeRelease(pVdsSwProvider); // hResult = pVdsVolume-AddAccessPath(TEXT("G:\")); if(SUCCEEDED(hResult)) AfxMessageBox(L"Add Access Path Successfully"); else AfxMessageBox(L"Unable to Add access path"); //UUID of IVdsVolumeMF {EE2D5DED-6236-4169-931D-B9778CE03DC6} static const GUID GUID_IVdsVolumeMF = {0xEE2D5DED, 0x6236, 4169,{0x93, 0x1D, 0xB9, 0x77, 0x8C, 0xE0, 0x3D, 0XC6} }; hResult = pService->GetObject(GUID_IVdsVolumeMF, VDS_OT_VOLUME, &ppObjUnk); if(hResult == VDS_E_OBJECT_NOT_FOUND) AfxMessageBox(L"Object Not found"); if(hResult == VDS_E_INITIALIZED_FAILED) AfxMessageBox(L"Initialization failed"); // pVdsVolume = reinterpret_cast(ppObjUnk); if(SUCCEEDED(hResult)) { // hResult = pVdsVolume-AddAccessPath(TEXT("G:\")); if(SUCCEEDED(hResult)) { IVdsAsync* ppVdsSync; AfxMessageBox(L"Formatting is about to Start......"); // hResult = pVdsVolume-Format(VDS_FST_UDF, TEXT("UDF_FORMAT_TEST"), 2048, TRUE, FALSE, FALSE, &ppVdsSync); if(SUCCEEDED(hResult)) AfxMessageBox(L"Formatting Started......."); else AfxMessageBox(L"Formatting Failed"); } else AfxMessageBox(L"Unable to Add Access Path"); } _SafeRelease(pVdsVolume); } else { AfxMessageBox(L"VDS Service Cannot be Loaded"); } } } _SafeRelease(pService);


  • Is there an API to remotely read a Windows machine's audit configuration?

    - by JCCyC
    I need to know, for each subcategory, whether it'll be audited on success, on failure, both, or none. Below is an example of the information I need to collect. Can I get this through WMI? Or if not, by other means, assuming I have proper (admin) credentials to the target machine? Again, to clarify, it's not the event log I need to read, it's the logging configuration. <security_state_change>AUDIT_SUCCESS</security_state_change> <security_system_extension>AUDIT_NONE</security_system_extension> <system_integrity>AUDIT_SUCCESS_FAILURE</system_integrity> <ipsec_driver>AUDIT_NONE</ipsec_driver> <other_system_events>AUDIT_SUCCESS_FAILURE</other_system_events> <logon>AUDIT_SUCCESS</logon> <logoff>AUDIT_SUCCESS</logoff> <account_lockout>AUDIT_SUCCESS</account_lockout> <ipsec_main_mode>AUDIT_NONE</ipsec_main_mode> <ipsec_quick_mode>AUDIT_NONE</ipsec_quick_mode> <ipsec_extended_mode>AUDIT_NONE</ipsec_extended_mode> <special_logon>AUDIT_SUCCESS</special_logon> <other_logon_logoff_events>AUDIT_NONE</other_logon_logoff_events> <file_system>AUDIT_NONE</file_system> <registry>AUDIT_NONE</registry> <kernel_object>AUDIT_NONE</kernel_object> <sam>AUDIT_NONE</sam> <certification_services>AUDIT_NONE</certification_services> <application_generated>AUDIT_NONE</application_generated> <handle_manipulation>AUDIT_NONE</handle_manipulation> <file_share>AUDIT_NONE</file_share> <filtering_platform_packet_drop>AUDIT_NONE</filtering_platform_packet_drop> <filtering_platform_connection>AUDIT_NONE</filtering_platform_connection> <other_object_access_events>AUDIT_NONE</other_object_access_events> <sensitive_privilege_use>AUDIT_NONE</sensitive_privilege_use> <non_sensitive_privlege_use>AUDIT_NONE</non_sensitive_privlege_use> <other_privlege_use_events>AUDIT_NONE</other_privlege_use_events> <process_creation>AUDIT_NONE</process_creation> <process_termination>AUDIT_NONE</process_termination> <dpapi_activity>AUDIT_NONE</dpapi_activity> <rpc_events>AUDIT_NONE</rpc_events> <audit_policy_change>AUDIT_SUCCESS</audit_policy_change> <authentication_policy_change>AUDIT_SUCCESS</authentication_policy_change> <authorization_policy_change>AUDIT_NONE</authorization_policy_change> <mpssvc_rule_level_policy_change>AUDIT_NONE</mpssvc_rule_level_policy_change> <filtering_platform_policy_change>AUDIT_NONE</filtering_platform_policy_change> <other_policy_change_events>AUDIT_NONE</other_policy_change_events> <user_account_management>AUDIT_SUCCESS</user_account_management> <computer_account_management>AUDIT_NONE</computer_account_management> <security_group_management>AUDIT_SUCCESS</security_group_management> <distribution_group_management>AUDIT_NONE</distribution_group_management> <application_group_management>AUDIT_NONE</application_group_management> <other_account_management_events>AUDIT_NONE</other_account_management_events> <directory_service_access>AUDIT_NONE</directory_service_access> <directory_service_changes>AUDIT_NONE</directory_service_changes> <directory_service_replication>AUDIT_NONE</directory_service_replication> <detailed_directory_service_replication>AUDIT_NONE</detailed_directory_service_replication> <credential_validation>AUDIT_NONE</credential_validation> <kerberos_ticket_events>AUDIT_NONE</kerberos_ticket_events> <other_account_logon_events>AUDIT_NONE</other_account_logon_events>
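    Not WMI, but worth noting as a fallback: on Vista/Server 2008 and later the effective per-subcategory settings can be dumped locally with auditpol.exe, so if you can run a command on the target (WinRM, PsExec, a scheduled task) you could parse its output instead of querying an API. A sketch:

        REM human-readable dump: every subcategory with Success / Failure / No Auditing
        auditpol /get /category:*
        REM same data as CSV, easier to parse programmatically
        auditpol /get /category:* /r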


  • OAM OVD integration - Error encountered during performance testing: "LDAP response read timed out, timeout used:2000ms"

    - by siddhartha_sinha
    While working on OAM OVD integration for one of my client, I have been involved in the performance test of the products wherein I encountered OAM authentication failures while talking to OVD during heavy load. OAM logs revealed the following: oracle.security.am.common.policy.common.response.ResponseException: oracle.security.am.engines.common.identity.provider.exceptions.IdentityProviderException: OAMSSA-20012: Exception in getting user attributes for user : dummy_user1, idstore MyIdentityStore with exception javax.naming.NamingException: LDAP response read timed out, timeout used:2000ms.; remaining name 'ou=people,dc=oracle,dc=com' at oracle.security.am.common.policy.common.response.IdentityValueProvider.getUserAttribute(IdentityValueProvider.java:271) ... During the authentication and authorization process, OAM complains that the LDAP repository is taking too long to return user attributes.The default value is 2 seconds as can be seen from the exception, "2000ms". While troubleshooting the issue, it was found that we can increase the ldap read timeout in oam-config.xml.  For reference, the attribute to add in the oam-config.xml file is: <Setting Name="LdapReadTimeout" Type="xsd:string">2000</Setting> However it is not recommended to increase the time out unless it is absolutely necessary and ensure that back-end directory servers are working fine. Rather I took the path of tuning OVD in the following manner: 1) Navigate to ORACLE_INSTANCE/config/OPMN/opmn folder and edit opmn.xml. Search for <data id="java-options" ………> and edit the contents of the file with the highlighted items: <category id="start-options"><data id="java-bin" value="$ORACLE_HOME/jdk/bin/java"/><data id="java-options" value="-server -Xms1024m -Xmx1024m -Dvde.soTimeoutBackend=0 -Didm.oracle.home=$ORACLE_HOME -Dcommon.components.home=$ORACLE_HOME/../oracle_common -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/opt/bea/Middleware/asinst_1/diagnostics/logs/OVD/ovd1/ovdGClog.log -XX:+UseConcMarkSweepGC -Doracle.security.jps.config=$ORACLE_INSTANCE/config/JPS/jps-config-jse.xml"/><data id="java-classpath" value="$ORACLE_HOME/ovd/jlib/vde.jar$:$ORACLE_HOME/jdbc/lib/ojdbc6.jar"/></category></module-data><stop timeout="120"/><ping interval="60"/></process-type> When the system is busy, a ping from the Oracle Process Manager and Notification Server (OPMN) to Oracle Virtual Directory may fail. As a result, OPMN will restart Oracle Virtual Directory after 20 seconds (the default ping interval). To avoid this, consider increasing the ping interval to 60 seconds or more. 2) Navigate to ORACLE_INSTANCE/config/OVD/ovd1 folder.Open listeners.os_xml file and perform the following changes: · Search for <ldap id=”Ldap Endpoint”…….> and point the cursor to that line. · Change threads count to 200. · Change anonymous bind to Deny. · Change workQueueCapacity to 8096. Add a new parameter <useNIO> and set its value to false viz: <useNIO>false</useNio> Snippet: <ldap version="8" id="LDAP Endpoint"> ....... .......  <socketOptions><backlog>128</backlog>         <reuseAddress>false</reuseAddress>         <keepAlive>false</keepAlive>         <tcpNoDelay>true</tcpNoDelay>         <readTimeout>0</readTimeout>      </socketOptions> <useNIO>false</useNIO></ldap> Restart OVD server. For more information on OVD tuneup refer to http://docs.oracle.com/cd/E25054_01/core.1111/e10108/ovd.htm. Please Note: There were few patches released from OAM side for performance tune-up as well. Will provide the updates shortly !!!


  • NFS to NFS mount

    - by dude
    I have a machine that I need to bridge NFS files to. Can I mount an NFS directory from machine1 on machine2, and then mount that mounted directory from machine2 on machine3 via NFS? Do you see any problems with that? I am basically bridging some subnet domains this way, in a certain fashion. My development machine is on a different, separate (unbridged) network from where I would like to use the files, and I would like this machine1 (dev machine) - machine2 (passthrough machine) - machine3 (test machine) connection. And no, there is no way to move the test machine: it's a chassis :) and it's two buildings away.
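    Whether the re-export works at all depends on the NFS server on machine2 - the Linux kernel NFS server has traditionally refused to export a directory that is itself an NFS mount, and user-space servers such as unfs3 are one common workaround - but where it is supported, the /etc/exports entry on machine2 needs an explicit fsid. A rough sketch, with paths and hostname as placeholders:

        # /etc/exports on machine2, re-exporting the directory mounted from machine1
        /mnt/machine1_share   machine3(rw,sync,fsid=1,no_subtree_check)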


  • How to Troubleshoot TFS Build Server Failure?

    - by Tarun Arora
    Ever found your self in this helpless situation where you think you have tried every possible suggestion on the internet to bring the build server back but it just won’t work. Well some times before hunting around for a solution it is important to understand what the problem is, if the error messages in the build logs don’t seem to help you can always enable tracing on the build server to get more information on what could possibly be the root cause of failure. In this blog post today I’ll be showing you how to enable tracing on, - TFS 2010/11 Server - Build Server - Client Enable Tracing on Team Foundation Server 2010/2011 On the Team Foundation Server navigate to C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services, right click web.config and from the context menu select edit.          Search for the <appSettings> node in the config file and set the value of the key ‘traceWriter’ to true.          In the <System.diagnostics> tag set the value of switches from 0 to 4 to set the trace level to maximum to write diagnostics level trace information.          Restart the TFS Application pool to force this change to take effect. The application pool restart will impact any one using the TFS server at present. Note - It is recommended that you do not make any changes to the TFS production application server, this can have serious consequences and can even jeopardize the installation of your server.          Download the Debug view tool from http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx and set it to capture “Global Events”. Perform any actions in the Team Explorer on the client machine, you should be able to see a series of trace data in the debug view tool now.         Enable Tracing on Build Controller/Agents Log on to the Build Controller/Agent and Navigate to the directory C:\Program Files\Microsoft Team Foundation Server 2010\Tools         Look for the configuration file ‘TFSBuildServiceHost.exe.config’ if it is not already there create a new text file and rename it to ‘TFSBuildServiceHost.exe.config’         To Enable tracing uncomment the <system.diagnostics> and paste the snippet below if it is not already there. <configuration> <system.diagnostics> <switches> <add name="BuildServiceTraceLevel" value="4"/> </switches> <trace autoflush="true" indentsize="4"> <listeners> <add name="myListener" type="Microsoft.TeamFoundation.TeamFoundationTextWriterTraceListener, Microsoft.TeamFoundation.Common, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" initializeData="c:\logs\TFSBuildServiceHost.exe.log" /> <remove name="Default" /> </listeners> </trace> </system.diagnostics> </configuration> The highlighted path above is where the Log file will be created. If the folder is not already there then create the folder, also, make sure that the account running the build service has access to write to this folder.         Restart the build Controller/Agent service from the administration console (or net stop tfsbuildservicehost & net start tfsbuildservicehost) in order for the new setting to be picked up.         Enable TFS Tracing on the Client Machine On the client machine, shut down Visual Studio, navigate to C:\Program Files\Microsoft Visual Studio 10.0\Common 7\IDE          Search for devenv.exe.config, make a backup copy of the config file and right click the file and from the context menu select edit. If its not already there create this file.          
Edit devenv.exe.config by adding the below code snippet before the last </configuration> tag <system.diagnostics> <switches> <add name="TeamFoundationSoapProxy" value="4" /> <add name="VersionControl" value="4" /> </switches> <trace autoflush="true" indentsize="3"> <listeners> <add name="myListener" type="Microsoft.TeamFoundation.TeamFoundationTextWriterTraceListener,Microsoft.TeamFoundation.Common, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" initializeData="c:\tf.log" /> <add name="perfListener" type="Microsoft.TeamFoundation.Client.PerfTraceListener, Microsoft.TeamFoundation.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/> </listeners> </trace> </system.diagnostics> The highlighted path above is where the Log file will be created. If the folder is not already there then create the folder. Start Visual Studio and after a bit of activity you should be able to see the new log file being created on the folder specified in the config file. Other Resources Below are some Key resource you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN.   Thank you for taking the time out and reading this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Have you come across an interesting one to one with the build server, please share your experience here. Questions/Feedback/Suggestions, etc please leave a comment. Thank You! Share this post : CodeProject


  • Using CMS for App Configuration - Part 1, Deploying Umbraco

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/06/04/using-cms-for-app-configurationndashpart-1-deploy-umbraco.aspxSince my last post on using CMS for semi-static API content, How about a new platform for your next API… a CMS?, I’ve been using the idea for centralized app configuration, and this post is the first in a series that will walk through how to do that, step-by-step. The approach gives you a platform-independent, easily configurable way to specify your application configuration for different environments, with a built-in approval workflow, change auditing and the ability to easily rollback to previous settings. It’s like Azure Web and Worker Roles where you can specify settings that change at runtime, but it's not specific to Azure - you can use it for any app that needs changeable config, provided it can access the Internet. The series breaks down into four posts: Deploying Umbraco – the CMS that will store your configurable settings and the current values; Publishing your config – create a document type that encapsulates your settings and a template to expose them as JSON; Consuming your config – in .NET, a simple client that uses dynamic objects to access settings; Config lifecycle management – how to publish, audit, and rollback settings. Let’s get started. Deploying Umbraco There’s an Umbraco package on Azure Websites, so deploying your own instance is easy – but there are a couple of things to watch out for, so this step-by-step will put you in a good place. Create From Gallery The easiest way to get started is with an Azure subscription, navigate to add a new Website and then Create From Gallery. Under CMS, you’ll see an Umbraco package (currently at version 7.1.3): Configure Your App For high availability and scale, you’ll want your CMS on separate kit from anything else you have in Azure, so in the configuration of Umbraco I’d create a new SQL Azure database – which Umbraco will use to store all its content: You can use the free 20mb database option if you don’t have demanding NFRs, or if you’re just experimenting. You’ll need to specify a password for a SQL Server account which the Umbraco service will use, and changing from the default username umbracouser is probably wise. Specify Database Settings You can create a new database on an existing server if you have one, or create new. If you create a new server *do not* use the same username for the database server login as you used for the Umbraco account. If you do, the deployment will fail later. Think of this as the SQL Admin account that you can use for managing the db, the previous account was the service account Umbraco uses to connect. Make Tea If you have a fast kettle. It takes about two minutes for Azure to create and provision the website and the database. Install Umbraco So far we’ve deployed an empty instance of Umbraco using the Azure package, and now we need to browse to the site and complete installation. My Website was called my-app-config, so to complete installation I browse to http://my-app-config.azurewebsites.net:   Enter the credentials you want to use to login – this account will have full admin rights to the Umbraco instance. Note that between deploying your new Umbraco instance and completing installation in this step, anyone can browse to your website and complete the installation themselves with their own credentials, if they know the URL. Remote possibility, but it’s there. From this page *do not* click the big green Install button. 
If you do, Umbraco will configure itself with a local SQL Server CE database (.sdf file on the Web server), and ignore the SQL Azure database you’ve carefully provisioned and may be paying for. Instead, click on the Customize link and: Configure Your Database You need to enter your SQL Azure database details here, so you’ll have to get the server name from the Azure Management Console. You don’t need to explicitly grant access to your Umbraco website for the database though. Click Continue and you’ll be offered a “starter” website to install: If you don’t know Umbraco at all (but you are familiar with ASP.NET MVC) then a starter website is worthwhile to see how it all hangs together. But after a while you’ll have a bunch of artifacts in your CMS that you don’t want and you’ll have to work out which you can safely delete. So I’d click “No thanks, I do not want to install a starter website” and give yourself a clean Umbraco install. When it completes, the installation will log you in to the welcome screen for managing Umbraco – which you can access from http://my-app-config.azurewebsites.net/umbraco: That’s It Easy. Umbraco is installed, using a dedicated SQL Azure instance that you can separately scale, sync and backup, and ready for your content. In the next post, we’ll define what our app config looks like, and publish some settings for the dev environment.


  • Building the Ultimate SharePoint 2010 Development Environment

    - by Manesh Karunakaran
    It’s been more than a month since SharePoint 2010 RTMed. And a lot of people have downloaded and set up their very own SharePoint 2010 development rigs. And quite a few people have written blogs about setting up good development environments, there is even an MSDN article on it. Two of the blogs worth noting are from MVPs Sahil Malik and Wictor Wilén. Make sure that you check these out as well. Part of the bad side-effects of being a geek is the need to do the technical stuff the best way possible (pragmatic or otherwise), but the problem with this is that what is considered “best” is relative. Precisely the reason why you are reading this post now. Most of the posts that I read are out dated/need updations or are using the wrong OS’es or virtualization solutions (again, opinions vary) or using them the wrong way. Here’s a developer’s view of Building the Ultimate SharePoint 2010 Development Rig. If you are a sales guy, it’s time to close this window. Confusion 1: Which Host Operating System and Virtualization Solution to use? This point has been beaten to death in numerous blog posts in the past, if you have time to invest, read this excellent post by our very own SharePoint Joel on this subject. But if you are planning to build the Ultimate Development Rig, then Windows Server 2008 R2 with Hyper-V is the option that you should be looking at. I have been using this as my primary OS for about 6-7 months now, and I haven’t had any Driver issue or Application compatibility issue. In my experience all the Windows 7 drivers work fine with WIN2008 R2 also. You can enable Aero for eye candy (and the Windows 7 look and feel) and except for a few things like the Hibernation support (which a can be enabled if you really want it), Windows Server 2008 R2, is the best Workstation OS that I have used till date. But frankly the answer to this question of which OS to use depends primarily on one question - Are you willing to change your primary OS? If the answer to that is ‘Yes’, then Windows 2008 R2 with Hyper-V is the best option, if not look at vmWare or VirtualBox, both are equally good. Those who are familiar with a Virtual PC background might prefer Sun VirtualBox. Besides, these provide support for running 64 bit guest machines on 32 bit hosts if the underlying hardware is truly 64 bit. See my earlier post on this. Since we are going to make the ultimate rig, we will use Windows Server 2008 R2 with Hyper-V, for reasons mentioned above. Confusion 2: Should I use a multi-(virtual) server set up? A lot of people use multiple servers for their development environments - like Wictor Wilén is suggesting - one server hosting the Active directory, one hosting SharePoint Server and another one for SQL Server. True, this mimics the production environment the best possible way, but as somebody who has fallen for this set up earlier, I can tell you that you don’t really get anything by doing this. Microsoft has done well to ensure that if you can do it on one machine, you can do it in a farm environment as well. Besides, when you run multiple Server class machine instances in parallel, there are a lot of unwanted processor cycles wasted for no good use. In my personal experience, as somebody who needs to switch between MOSS 2007/SharePoint 2010 environments from time to time, the best possible solution is to Make the host Windows Server 2008 R2 machine your Domain Controller (AD Server) Make all your Virtual Guest OS’es join this domain. 
Have each Individual Guest OS Image have it’s own local SQL Server instance. The advantages are that you can reuse the users and groups in each of the Guest operating systems, you can manage the users in one place, AD is light weight and doesn't take too much resources on your host machine and also having separate SQL instances for each of the Development images gives you maximum flexibility in terms of configuration, for example your SharePoint rigs can have simpler DB configurations, compared to your MS BI blast pits. Confusion 3: Which Operating System should I use to run SharePoint 2010 Now that’s a no brainer. Use Windows 2008 R2 as your Guest OS. When you are building the ultimate rig, why compromise? If you are planning to run Windows Server 2008 as your Guest OS, there are a few patches that you need to install at different times during the installation, for that follow the steps mentioned here Okay now that we have made our choices, let’s get to the interesting part of building the rig, Step 1: Prepare the host machine – Install Windows Server 2008 R2 Install Windows Server 2008 R2 on your best Desktop/Laptop. If you have read this far, I am quite sure that you are somebody who can install an OS on your own, so go ahead and do that. Make sure that you run the compatibility wizard before you go ahead and nuke your current OS. There are plenty of blogs telling you how to make a good Windows 2008 R2 Workstation that feels and behaves like a Windows 7 machine, follow one and once you are done, head to Step 2. Step 2: Configure the host machine as a Domain Controller Before we begin this, let me tell you, this step is completely optional, you don’t really need to do this, you can simply use the local users on the Guest machines instead, but if this is a much cleaner approach to manage users and groups if you run multiple guest operating systems.  This post neatly explains how to configure your Windows Server 2008 R2 host machine as a Domain Controller. Follow those simple steps and you are good to go. If you are not able to get it to work, try this. Step 3: Prepare the guest machine – Install Windows Server 2008 R2 Open Hyper-V Manager Choose to Create a new Guest Operating system Allocate at least 2 GB of Memory to the Guest OS Choose the Windows 2008 R2 Installation Media Start the Virtual Machine to commence installation. Once the Installation is done, Activate the OS. Step 4: Make the Guest operating systems Join the Domain This step is quite simple, just follow these steps below, Fire up Hyper-V Manager, open your Guest OS Click on Start, and Right click on ‘Computer’ and choose ‘Properties’ On the window that pops-up, click on ‘Change Settings’ On the ‘System Properties’ Window that comes up, Click on the ‘Change’ button Now a window named ‘Computer Name/Domain Changes’ opens up, In the text box titled Domain, type in the Domain name from Step 2. Click Ok and windows will show you the welcome to domain message and ask you to restart the machine, click OK to restart. If the addition to domain fails, that means that you have not set up networking in Hyper-V for the Guest OS to communicate with the Host. To enable it, follow the steps I had mentioned in this post earlier. Step 5: Install SQL Server 2008 R2 on the Guest Machine SQL Server 2008 R2 gets installed with out hassle on Windows Server 2008 R2. SQL Server 2008 needs SP2 to work properly on WIN2008 R2. Also SQL Server 2008 R2 allows you to directly add PowerPivot support to SharePoint. 
Choose to install in SharePoint Integrated Mode in Reporting Server Configuration. Step 6: Install KB971831 and SharePoint 2010 Pre-requisites Now install the WCF Hotfix for Microsoft Windows (KB971831) from this location, and SharePoint 2010 Pre-requisites from the SP2010 Installation media. Step 7: Install and Configure SharePoint 2010 Install SharePoint 2010 from the installation media, after the installation is complete, you are prompted to start the SharePoint Products and Technologies Configuration Wizard. If you are using a local instance of Microsoft SQL Server 2008, install the Microsoft SQL Server 2008 KB 970315 x64 before starting the wizard. If your development environment uses a remote instance of Microsoft SQL Server 2008 or if it has a pre-existing installation of Microsoft SQL Server 2008 on which KB 970315 x64 has already been applied, this step is not necessary. With the wizard open, do the following: Install SQL Server 2008 KB 970315 x64. After the Microsoft SQL Server 2008 KB 970315 x64 installation is finished, complete the wizard. Alternatively, you can choose not to run the wizard by clearing the SharePoint Products and Technologies Configuration Wizard check box and closing the completed installation dialog box. Install SQL Server 2008 KB 970315 x64, and then manually start the SharePoint Products and Technologies Configuration Wizard by opening a Command Prompt window and executing the following command: C:\Program Files\Common Files\Microsoft Shared Debug\Web Server Extensions\14\BIN\psconfigui.exe The SharePoint Products and Technologies Configuration Wizard may fail if you are using a computer that is joined to a domain but that is not connected to a domain controller. Step 8: Install Visual Studio 2010 and SharePoint 2010 SDK Install Visual Studio 2010 Download and Install the Microsoft SharePoint 2010 SDK Step 9: Install PowerPivot for SharePoint and Configure Reporting Services Pop-In the SQLServer 2008 R2 installation media once again and install PowerPivot for SharePoint. This will get added as another instance named POWERPIVOT. Configure Reporting Services by following the steps mentioned here, if you need to get down to the details on how the integration between SharePoint 2010 and SQL Server 2008 R2 works, see Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010 an excellent article by Alan Le Marquand Step 10: Download and Install Sample Databases for Microsoft SQL Server 2008R2 SharePoint 2010 comes with a lot of cool stuff like PerformancePoint Services and BCS, if you need to try these out, you need to have data in your databases. So if you want to save yourself the trouble of creating sample data for your PerformancePoint and BCS experiments, download and install Sample Databases for Microsoft SQL Server 2008R2 from CodePlex. And you are done! Fire up your Visual Studio 2010 and Start Coding away!!


  • Rename Applications and Virtual Directories in IIS7

    - by AngelEyes
    from http://lanitdev.wordpress.com/2010/09/02/rename-applications-and-virtual-directories-in-iis7/

    Rename Applications and Virtual Directories in IIS7
    September 2, 2010 — Brian Grinstead

    Have you ever wondered why the box to change the name or "Alias" on an application or virtual directory is greyed out (see screenshot below)? I found a way to change the name without recreating all your settings. It uses the built-in administration command in IIS7, called appcmd.

    Renaming applications in IIS7: open a command prompt to see all of your applications.

        C:> %systemroot%\system32\inetsrv\appcmd list app
            APP "Default Web Site/OldApplicationName"
            APP "Default Web Site/AnotherApplication"

    Run a command like this to change your "OldApplicationName" path to "NewApplicationName". Now you can use http://localhost/newapplicationname

        C:> %systemroot%\system32\inetsrv\appcmd set app "Default Web Site/OldApplicationName" -path:/NewApplicationName
            APP object "Default Web Site/OldApplicationName" changed

    Renaming virtual directories in IIS7: open a command prompt to see all of your virtual directories.

        C:> %systemroot%\system32\inetsrv\appcmd list vdir
            VDIR "Default Web Site/OldApplicationName/Images" (physicalPath:\\server\images)
            VDIR "Default Web Site/OldApplicationName/Data/Config" (physicalPath:\\server\config)

    We want to rename /Images to /Images2 and /Data/Config to /Data/Config2. Here are the example commands:

        C:> %systemroot%\system32\inetsrv\appcmd set vdir "Default Web Site/OldApplicationName/Images" -path:/Images2
            VDIR object "Default Web Site/OldApplicationName/Images" changed

        C:> %systemroot%\system32\inetsrv\appcmd set vdir "Default Web Site/OldApplicationName/Data/Config" -path:/Data/Config2
            VDIR object "Default Web Site/OldApplicationName/Data/Config" changed


  • Mac OS X Server Open Directory does not push Software Update settings to clients

    - by joxl
    I have an Xserve G5 running Mac OS X Server 10.5.8 configured as an Open Directory master. I have also enabled and configured Software Update service on the machine. The SUS is configured to serve Tiger, Leopard and Snow Leopard clients (see http://discussions.apple.com/message.jspa?messageID=10297359#10297359) The clients bound to the OD are a variety of Mac's running OS X 10.4, 10.5 or 10.6. In Workgroup Manager, I have created 3 machine groups for each client OS. Each group is configured with a custom SUS URL, and the managed client computers are members accordingly (see http://discussions.apple.com/thread.jspa?messageID=10493154#10493154) My problem is that the server pushes the SUS settings to some of the client machines, but not all. When I first configured all this stuff on the server (a few weeks ago) I was closely monitoring a few of the client machines to confirm that they received the custom settings. I noticed that some of the clients (10.4/5/6 alike) seemed to get the settings immediately, others didn't show the new settings until after a reboot. As I said, results are mixed across OS's, but some clients will not "sync" at all. My immediate thought was to unbind/rebind the problematic machines. I did this on several client computers with no success. For example, today I was working on one of the Tiger clients. I noticed it was not pointed at my local SUS, so I checked the OD binding; it was fine. Just to be sure I unbound the machine. Next, I checked WM and confirmed the computer record was gone. I noticed the machine group still had a residual (broken?) member from the unbound client; I manually removed this. Finally, I re-bound the client to OD and re-added the machine to it's correct group in WM. Unfortunately, the client still pings apple's SUS for updates. Just to play it safe I rebooted the client, but to no avail, it will not see my local SUS. To confirm that there is nothing wrong with the server, or the client's connection to it, forcefully pointed the machine at my SUS: sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate CatalogURL "$LOCAL_SUS_URL" and the machine successfully updated off my local server. Great, successful updates, but problem not solved. I've done exhaustive reading on discussions.apple.com (not saying I read everything, I'm just saying I have read a lot) without a good answer. The discouraging thing is that a lot of OD problems I've read about only result in the sysadmin completely reinstalling the server, or OD, or some other similarly heavy-handed operation. At this point, I am not willing to go that route. I still have hope that I can find the reason for this flaky behavior. If anyone can point me in a helpful direction it would be much appreciated. EDIT: Indeed, some files are being pushed to the client: # from client machine: $ sudo find /Library -type f -name com.apple.SoftwareUpdate.plist /Library/Managed Preferences/com.apple.SoftwareUpdate.plist /Library/Managed Preferences/username/com.apple.SoftwareUpdate.plist /Library/Preferences/com.apple.SoftwareUpdate.plist A few weeks ago, prior to my (previously mentioned) modifications, the SUS was still running "stock". Which meant it could not serve SL (10.6) machines. At that time, the Software Update settings were setup in WM under User Groups. This didn't make any sense because some users work on multiple machines with different OS's. Before creating Machine Groups in WM, I deleted all the SU settings from the User Group Preferences. 
This just makes the whole thing more confusing, because when I see a file here: /Library/Managed Preferences/username/com.apple.SoftwareUpdate.plist I assume it's still remaining from the "old" settings, because I wouldn't think a Machine Setting belongs there. Despite all the com.apple.SoftwareUpdate.plist hanging around under the Managed Preferences, why does the client machine still call home to Apple and not my SUS? # on client machine: $ date Tue Jan 25 17:01:46 EST 2011 $ softwareupdate --list Software Update Tool Copyright 2002-2005 Apple No new software available. switch terminals... # on server: $ tail -n1 /var/log/swupd/swupd_access_log 10.x.x.x - - [25/Jan/2011:15:54:29 -0500] XXXX POST "/cgi-bin/SoftwareUpdateServerStats" 200 13 ... Notice the date of the client softwareupdate and the latest access to the SUS server; the server never heard a peep from that client.


  • Encrypted passwords for better security on server

    - by Ke
    Hi, I use WordPress and other CMSes, and all of these keep plain-text passwords in their config files, e.g. in wp-config.php. Is this really how an administrator is expected to handle security? I realise it's possible to move wp-config.php outside the web root, but if the server itself is compromised it is still possible to find the wp-config file and the password inside it, and then the system is compromised. Is there a way to encrypt all passwords on the system, so that the web applications' config files use an encrypted password rather than plain text? Is there a sensible way of keeping plain-text passwords off the server? PS: I use Linux VPS Ubuntu servers. Cheers, Ke


  • How to migrate an ASP.NET MVC 3 or MVC 4 project to ASP.NET MVC 5?

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/10/16/how-to-migrate-asp.net-mvc-3--mvc4-project-to.aspx

    MVC 5 ships with VS2013, and MVC 3 is not supported in VS2013 (I confirmed this on Channel 9 a while back). So people who have only VS2013 installed, or who no longer have an older version, will run into trouble with a project that is still on MVC 3. The error happens because the MVC 4 and MVC 5 installations do not contain the DLLs used by version 3 of ASP.NET MVC.

    Don't panic - here is a trick to solve the issue if you want to upgrade your project.

    When you open the project you will see that some DLLs under References have a yellow icon, which means those DLLs are missing or could not be found on your system. Note their names, remove them from References, and add them back via Add Reference (removing them first is what lets VS accept the newer version of the same assembly). The DLLs in question are System.Web.Mvc and the Razor and WebPages assemblies - in MVC 3 we used old versions of these.

    Once all the assemblies are added, open web.config. There are two web.config files in an MVC project: one in the root folder and one in the Views folder. You need to update the version numbers in both. This is not a big deal if you know the assembly names: wherever the web.config shows an assembly version such as 3.0.0.0, replace the 3 with 4 or 5 according to the version you installed, and apply the same change for all the relevant DLLs in both web.config files.

    Note: in the standard VS template the views live in the ~/Views folder, but if you keep views in other folders (for example areas/Views or themes/Views) and those folders also contain a web.config, remember to update those too. Otherwise the project will compile with no warnings or errors, but it certainly won't work at runtime.

    After doing these things you can compile your project and it will work as it should. Thanks for reading my post. Follow me on FB and Twitter to stay updated.
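    As a rough illustration of the edits described above (the version numbers are placeholders - use whatever the MVC/Razor packages you installed actually report): in the root web.config the binding redirect ends up looking something like the first snippet, and in each Views folder's web.config the Razor host entries get the same bump:

        <!-- root web.config -->
        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
              <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" />
              <bindingRedirect oldVersion="1.0.0.0-5.0.0.0" newVersion="5.0.0.0" />
            </dependentAssembly>
          </assemblyBinding>
        </runtime>

        <!-- Views\web.config -->
        <system.web.webPages.razor>
          <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
          <pages pageBaseType="System.Web.Mvc.WebViewPage">
            <!-- namespaces etc. unchanged -->
          </pages>
        </system.web.webPages.razor>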


  • IntelliSense for Razor Hosting in non-Web Applications

    - by Rick Strahl
    When I posted my Razor Hosting article a couple of weeks ago I got a number of questions on how to get IntelliSense to work inside of Visual Studio while editing your templates. The answer to this question is mainly dependent on how Visual Studio recognizes assemblies, so a little background is required. If you open a template just on its own as a standalone file by clicking on it say in Explorer, Visual Studio will open up with the template in the editor, but you won’t get any IntelliSense on any of your related assemblies that you might be using by default. It’ll give Intellisense on base System namespace, but not on your imported assembly types. This makes sense: Visual Studio has no idea what the assembly associations for the single file are. There are two options available to you to make IntelliSense work for templates: Add the templates as included files to your non-Web project Add a BIN folder to your template’s folder and add all assemblies required there Including Templates in your Host Project By including templates into your Razor hosting project, Visual Studio will pick up the project’s assembly references and make IntelliSense available for any of the custom types in your project and on your templates. To see this work I moved the \Templates folder from the samples from the Debug\Bin folder into the project root and included the templates in the WinForm sample project. Here’s what this looks like in Visual Studio after the templates have been included:   Notice that I take my original example and type cast the Context object to the specific type that it actually represents – namely CustomContext – by using a simple code block: @{ CustomContext Model = Context as CustomContext; } After that assignment my Model local variable is in scope and IntelliSense works as expected. Note that you also will need to add any namespaces with the using command in this case: @using RazorHostingWinForm which has to be defined at the very top of a Razor document. BTW, while you can only pass in a single Context 'parameter’ to the template with the default template I’ve provided realize that you can also assign a complex object to Context. For example you could have a container object that references a variety of other objects which you can then cast to the appropriate types as needed: @{ ContextContainer container = Context as ContextContainer; CustomContext Model = container.Model; CustomDAO DAO = container.DAO; } and so forth. IntelliSense for your Custom Template Notice also that you can get IntelliSense for the top level template by specifying an inherits tag at the top of the document: @inherits RazorHosting.RazorTemplateFolderHost By specifying the above you can then get IntelliSense on your base template’s properties. For example, in my base template there are Request and Response objects. This is very useful especially if you end up creating custom templates that include your custom business objects as you can get effectively see full IntelliSense from the ‘page’ level down. For Html Help Builder for example, I’d have a Help object on the page and assuming I have the references available I can see all the way into that Help object without even having to do anything fancy. Note that the @inherits key is a GREAT and easy way to override the base template you normally specify as the default template. It allows you to create a custom template and as long as it inherits from the base template it’ll work properly. 
Since the last post I’ve also made some changes in the base template that allow hooking up some simple initialization logic so it gets much more easy to create custom templates and hook up custom objects with an IntializeTemplate() hook function that gets called with the Context and a Configuration object. These objects are objects you can pass in at runtime from your host application and then assign to custom properties on your template. For example the default implementation for RazorTemplateFolderHost does this: public override void InitializeTemplate(object context, object configurationData) { // Pick up configuration data and stuff into Request object RazorFolderHostTemplateConfiguration config = configurationData as RazorFolderHostTemplateConfiguration; this.Request.TemplatePath = config.TemplatePath; this.Request.TemplateRelativePath = config.TemplateRelativePath; // Just use the entire ConfigData as the model, but in theory // configData could contain many objects or values to set on // template properties this.Model = config.ConfigData as TModel; } to set up a strongly typed Model and the Request object. You can do much more complex hookups here of course and create complex base template pages that contain all the objects that you need in your code with strong typing. Adding a Bin folder to your Template’s Root Path Including templates in your host project works if you own the project and you’re the only one modifying the templates. However, if you are distributing the Razor engine as a templating/scripting solution as part of your application or development tool the original project is likely not available and so that approach is not practical. Another option you have is to add a Bin folder and add all the related assemblies into it. You can also add a Web.Config file with assembly references for any GAC’d assembly references that need to be associated with the templates. Between the web.config and bin folder Visual Studio can figure out how to provide IntelliSense. The Bin folder should contain: The RazorHosting.dll Your host project’s EXE or DLL – renamed to .dll if it’s an .exe Any external (bin folder) dependent assemblies Note that you most likely also want a reference to the host project if it contains references that are going to be used in templates. Visual Studio doesn’t recognize an EXE reference so you have to rename the EXE to DLL to make it work. Apparently the binary signature of EXE and DLL files are identical and it just works – learn something new everyday… For GAC assembly references you can add a web.config file to your template root. The Web.config file then should contain any full assembly references to GAC components: <configuration> <system.web> <compilation debug="true"> <assemblies> <add assembly="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> <add assembly="System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" /> <add assembly="System.Web.Extensions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> </assemblies> </compilation> </system.web> </configuration> And with that you should get full IntelliSense. Note that if you add a BIN folder and you also have the templates in your Visual Studio project Visual Studio will complain about reference conflicts as it’s effectively seeing both the project references and the ones in the bin folder. 
So it’s probably a good idea to use one or the other but not both at the same time :-) Seeing IntelliSense in your Razor templates is a big help for users of your templates. If you’re shipping an application level scripting solution especially it’ll be real useful for your template consumers/users to be able to get some quick help on creating customized templates – after all that’s what templates are all about – easy customization. Making sure that everything is referenced in your bin folder and web.config is a good idea and it’s great to see that Visual Studio (and presumably WebMatrix/Visual Web Developer as well) will be able to pick up your custom IntelliSense in Razor templates.© Rick Strahl, West Wind Technologies, 2005-2011Posted in Razor  


  • OSX 10.6 Cisco IPSEC strange behavior

    - by tair
    I'm trying to connect to Cisco IPSEC VPN of my company over DSL Internet. I managed to successfully connect using Cisco VPN Client, now I'm trying to switch to OSX 10.6 native client, because of licensing issues. The problems is that the connection fails with a dialog box containing the message: The negotiation with the VPN server failed. Verify the server address and try reconnecting. I checked logs: Jun 29 13:10:39 racoon[4551]: Connecting. Jun 29 13:10:39 racoon[4551]: IKE Packet: transmit success. (Initiator, Aggressive-Mode message 1). Jun 29 13:10:39 racoon[4551]: IKEv1 Phase1 AUTH: success. (Initiator, Aggressive-Mode Message 2). Jun 29 13:10:39 racoon[4551]: IKE Packet: receive success. (Initiator, Aggressive-Mode message 2). Jun 29 13:10:39 racoon[4551]: IKEv1 Phase1 Initiator: success. (Initiator, Aggressive-Mode). Jun 29 13:10:39 racoon[4551]: IKE Packet: transmit success. (Initiator, Aggressive-Mode message 3). Jun 29 13:10:42 racoon[4551]: IKE Packet: transmit success. (Mode-Config message). Jun 29 13:10:42 racoon[4551]: IKEv1 XAUTH: success. (XAUTH Status is OK). Jun 29 13:10:42 racoon[4551]: IKE Packet: transmit success. (Mode-Config message). Jun 29 13:10:42 racoon[4551]: IKEv1 Config: retransmited. (Mode-Config retransmit). Jun 29 13:10:42 racoon[4551]: IKE Packet: receive success. (MODE-Config). Jun 29 13:10:42 configd[19]: event_callback: Address added. previous interface setting (name: en1, address: 192.168.1.107), current interface setting (name: u92.168.54.147, subnet: 255.255.255.0, destination: 192.168.54.147). Jun 29 13:10:42 configd[19]: network configuration changed. Jun 29 13:10:42 vmnet-bridge[111]: Dynamic store changed Jun 29 13:10:42 named[62]: not listening on any interfaces Jun 29 13:10:58: --- last message repeated 1 time --- Jun 29 13:10:58 configd[19]: SCNCController: Disconnecting. (Connection tried to negotiate for, 16 seconds). Jun 29 13:10:58 racoon[4551]: IKE Packet: transmit success. (Information message). Jun 29 13:10:58 racoon[4551]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA). Jun 29 13:10:58 racoon[4551]: Disconnecting. (Connection tried to negotiate for, 19.113382 seconds). Jun 29 13:10:58 named[62]: not listening on any interfaces Jun 29 13:10:58 vmnet-bridge[111]: Dynamic store changed Jun 29 13:10:58 named[62]: not listening on any interfaces Jun 29 13:10:58 configd[19]: network configuration changed. Then I opened Terminal, started pinging a server behind VPN, and tried to connect again. Now connection is OK! Logs this time: Jun 29 13:46:53 racoon[8136]: Connecting. Jun 29 13:46:53 racoon[8136]: IKE Packet: transmit success. (Initiator, Aggressive-Mode message 1). Jun 29 13:46:53 racoon[8136]: IKEv1 Phase1 AUTH: success. (Initiator, Aggressive-Mode Message 2). Jun 29 13:46:53 racoon[8136]: IKE Packet: receive success. (Initiator, Aggressive-Mode message 2). Jun 29 13:46:53 racoon[8136]: IKEv1 Phase1 Initiator: success. (Initiator, Aggressive-Mode). Jun 29 13:46:53 racoon[8136]: IKE Packet: transmit success. (Initiator, Aggressive-Mode message 3). Jun 29 13:46:56 racoon[8136]: IKE Packet: transmit success. (Mode-Config message). Jun 29 13:46:56 racoon[8136]: IKEv1 XAUTH: success. (XAUTH Status is OK). Jun 29 13:46:56 racoon[8136]: IKE Packet: transmit success. (Mode-Config message). Jun 29 13:46:56 racoon[8136]: IKEv1 Config: retransmited. (Mode-Config retransmit). Jun 29 13:46:56 racoon[8136]: IKE Packet: receive success. (MODE-Config). Jun 29 13:46:56 configd[19]: event_callback: Address added. 
previous interface setting (name: en1, address: 192.168.1.107), current interface settinaddress: 192.168.54.149, subnet: 255.255.255.0, destination: 192.168.54.149). Jun 29 13:46:56 vmnet-bridge[111]: Dynamic store changed Jun 29 13:46:56 named[62]: not listening on any interfaces Jun 29 13:46:56 configd[19]: network configuration changed. Jun 29 13:46:56 named[62]: not listening on any interfaces Jun 29 13:46:56 racoon[8136]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1). Jun 29 13:46:56 racoon[8136]: IKE Packet: receive success. (Initiator, Quick-Mode message 2). Jun 29 13:46:56 racoon[8136]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3). Jun 29 13:46:56 racoon[8136]: IKEv1 Phase2 Initiator: success. (Initiator, Quick-Mode). Jun 29 13:46:56 racoon[8136]: Connected. Jun 29 13:46:56 configd[19]: SCNCController: Connected. I tested it several times and it consistently behaves the same. What is the magic?
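    A hedged reading of the workaround: generating traffic toward the remote network right before (and during) negotiation seems to be what lets the tunnel finish coming up, so the ping can be scripted instead of typed by hand. A minimal sketch, assuming 192.168.54.1 stands in for any reachable host behind the VPN (that address is an assumption, not taken from the logs above):

        # Sketch only: keep traffic flowing toward the VPN network while the
        # native OS X client negotiates, then stop the ping once connected.
        VPN_HOST=192.168.54.1                     # assumption: any host behind the VPN
        ping -c 30 "$VPN_HOST" >/dev/null 2>&1 &  # background pings during negotiation
        PING_PID=$!
        echo "Start the VPN connection from the Network preference pane now."
        # ...once the tunnel is up:
        # kill "$PING_PID"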

    Read the article

  • How do I measure performance of a virtual server?

    - by Sergey
    I've got a VPS running Ubuntu. Being a virtual server, I understand that it shares resources with an unknown number of other servers, and I'm noticing that it's considerably slower than my desktop machine. Is there some tool to measure the performance of the virtual machine? I'd be curious to see some approximate measure similar to bogomips, possibly for CPU (operations/sec), memory and disk read/write speed. I'd like to be able to compare those numbers to my desktop machine. I'm not interested in the specs of the actual physical machine my VPS is running on - by doing cat /proc/cpuinfo I can see that it's a nice quad-core Xeon machine, but that doesn't matter to me. I'm basically interested in how fast a program would run on my VPS - how many CPU operations it can do in a second, how many bytes it can write to RAM or to disk. I only have ssh access to the machine, so the tool needs to be command-line. I could write a script which, say, does some calculations in a loop for a second and counts how many loops it was able to do, or something similar to measure disk and RAM performance. But I'm sure something like this already exists.
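    A minimal sketch of the kind of script described above, runnable over ssh — time a fixed number of shell-arithmetic iterations for a rough CPU figure, and write a file with fdatasync for a rough disk figure. The file path and sizes are arbitrary assumptions; purpose-built tools such as sysbench give more rigorous numbers:

        #!/bin/sh
        # Very rough CPU measure: time one million arithmetic increments and
        # compare the elapsed time against the same command on the desktop.
        time sh -c 'i=0; while [ "$i" -lt 1000000 ]; do i=$((i+1)); done'

        # Very rough disk-write measure: fdatasync keeps the page cache from
        # hiding the real speed; dd prints the MB/s on its last output line.
        dd if=/dev/zero of=/tmp/ddtest bs=1M count=100 conv=fdatasync
        rm -f /tmp/ddtest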

    Read the article

  • net share not working as expected?

    - by Riduidel
    Hi, I'm creating an SMB share on a Windows machine using the net share command. I write net share MyCompany_server___0.0.1-SNAPSHOT___server=E:\JavaWorkspace\Product\src\generated\server Then this share is visible to other Windows machines among my machine's network shares, but it cannot be accessed. Is there an option I forgot to set? EDITED due to comment: Accessing the share from an OpenSUSE machine brings up the "The folder contents could not be displayed." message. Accessing the share from the Windows machine sharing that folder succeeds without any issue. Accessing the share from another Windows machine displays MyCompany_server_0.0.1-SNAPSHOT_server is not accessible. Access refused
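    One possibility (an assumption, since the command above sets no permissions at all) is that the share is created with a default ACL that refuses the connecting accounts at the share level. A hedged sketch of recreating it with an explicit grant on Windows Vista/Server 2008 or later; the Everyone/READ grant is only an example:

        rem Sketch only: drop the existing share and recreate it with explicit permissions.
        net share MyCompany_server___0.0.1-SNAPSHOT___server /DELETE
        net share MyCompany_server___0.0.1-SNAPSHOT___server=E:\JavaWorkspace\Product\src\generated\server /GRANT:Everyone,READ
        rem The NTFS permissions on the folder itself must allow the same accounts as well.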

    Read the article

  • Windows 2012 Server Hyper-V: Cannot see LAN

    - by Samuel
    I have one NIC on the machine, loaded XP in Hyper-V, and chose that network as the virtual switch. No LAN and no internet shows up on the client. Am I missing something? It used to work in 2008-R2. Details: One network card on the machine (Qualcomm Atheros AR8131 PCI-E Gigabit Ethernet controller). The virtual machine hard disk points to an existing XP-SP2 hard disk created using VPC 2007. The virtual machine network adapter is set up as a virtual switch to the real ethernet controller, with Enable virtual LAN identification set to 2 (no other virtual machine is created in the system). After the virtual machine boots, LAN shows empty in Control Panel > Network Connections (this is the XP client) and I also cannot access the internet. XP is showing an activation prompt, but as far as I know that should not disable the network! The virtual network switch is set to External.
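    A hedged guess: "Enable virtual LAN identification set to 2" tags all of the guest's traffic with VLAN 2, and unless the physical network actually carries that VLAN the XP client will see nothing. A sketch of checking and clearing the tag from PowerShell on the 2012 host (the VM name "XP" is an assumption); note also that XP without Integration Services cannot see the synthetic network adapter, so a legacy adapter may be needed:

        # Sketch only: inspect and remove the VLAN tag on the VM's adapter.
        Get-VMNetworkAdapterVlan -VMName "XP"            # shows current VLAN mode/ID
        Set-VMNetworkAdapterVlan -VMName "XP" -Untagged  # stop tagging with VLAN 2
        # Confirm the adapter is attached to the external virtual switch:
        Get-VMNetworkAdapter -VMName "XP" | Select-Object Name, SwitchName, MacAddress
        # If the guest still shows no adapter, add a legacy (emulated) NIC for XP:
        # Add-VMNetworkAdapter -VMName "XP" -IsLegacy $true -SwitchName "External"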

    Read the article

  • ASP.NET MVC 3 Hosting :: How to Upgrade ASP.NET MVC 2 Project to ASP.NET MVC 3

    - by mbridge
    ASP.NET MVC 3 can be installed side by side with ASP.NET MVC 2 on the same computer, which gives you flexibility in choosing when to upgrade an ASP.NET MVC 2 application to ASP.NET MVC 3. The simplest way to upgrade is to create a new ASP.NET MVC 3 project and copy all the views, controllers, code, and content files from the existing MVC 2 project to the new project and then to update the assembly references in the new project to match the old project. If you have made changes to the Web.config file in the MVC 2 project, you must also merge those changes with the Web.config file in the MVC 3 project. To manually upgrade an existing ASP.NET MVC 2 application to version 3, do the following: 1. In both Web.config files in the MVC 3 project, globally search and replace the MVC version. Find the following: System.Web.Mvc, Version=2.0.0.0 Replace it with the following System.Web.Mvc, Version=3.0.0.0 There are three changes in the root Web.config and four in the Views\Web.config file. 2. In Solution Explorer, delete the reference to System.Web.Mvc (which points to the version 2 DLL). Then add a reference to System.Web.Mvc (v3.0.0.0). 3. In Solution Explorer, right-click the project name and then select Unload Project. Then right-click again and select Edit ProjectName.csproj. 4. Locate the ProjectTypeGuids element and replace {F85E285D-A4E0-4152-9332-AB1D724D3325} with {E53F8FEA-EAE0-44A6-8774-FFD645390401}. 5. Save the changes and then right-click the project and select Reload Project. 6. If the project references any third-party libraries that are compiled using ASP.NET MVC 2, add the following highlighted bindingRedirect element to the Web.config file in the application root under the configuration section: <runtime>   <assemblyBinding >     <dependentAssembly>       <assemblyIdentity name="System.Web.Mvc"           publicKeyToken="31bf3856ad364e35"/>       <bindingRedirect oldVersion="2.0.0.0" newVersion="3.0.0.0"/>     </dependentAssembly>   </assemblyBinding> </runtime> Another ASP.NET MVC 3 article: - Rolling with Razor in MVC v3 Preview - Deploying ASP.NET MVC 3 web application to server where ASP.NET MVC 3 is not installed - RenderAction with ASP.NET MVC 3 Sessionless Controllers
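    For step 1, the kind of entry being replaced in the root Web.config typically looks like the following (an illustrative before/after; the exact sections present in your project may differ):

        <!-- Before (ASP.NET MVC 2) -->
        <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />

        <!-- After (ASP.NET MVC 3) -->
        <add assembly="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />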

    Read the article

  • Ubuntu 10.04 not detecting multiple monitors

    - by user28837
    I have 2 graphics cards, the output from the lspci: 01:00.0 VGA compatible controller: ATI Technologies Inc RV770 [Radeon HD 4850] 02:00.0 VGA compatible controller: ATI Technologies Inc RV710 [Radeon HD 4350] I have one monitor connected to the 4850 and 2 connected to the 4350. However when I go into System Preferences Monitors the only monitor shown is the one connected to the 4850. Is there something I need to enable for it to be able to use the other card? How do I get this to work. Thanks. As per request: X.Org X Server 1.7.6 Release Date: 2010-03-17 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.24-25-server i686 Ubuntu Current Operating System: Linux jeff-desktop 2.6.32-22-generic-pae #33-Ubuntu SMP Wed Apr 28 14:57:29 UTC 2010 i686 Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-22-generic-pae root=UUID=852e1013-4ed6-40fd-a462-c29087888383 ro quiet splash Build Date: 23 April 2010 05:11:50PM xorg-server 2:1.7.6-2ubuntu7 (Bryce Harrington <[email protected]>) Current version of pixman: 0.16.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Tue May 11 08:24:52 2010 (==) Using config file: "/etc/X11/xorg.conf" (==) Using config directory: "/usr/lib/X11/xorg.conf.d" (==) No Layout section. Using the first Screen section. (**) |-->Screen "Default Screen" (0) (**) | |-->Monitor "<default monitor>" (==) No device specified for screen "Default Screen". Using the first device section listed. (**) | |-->Device "Default Device" (==) No monitor specified for screen "Default Screen". Using a default monitor configuration. (==) Automatically adding devices (==) Automatically enabling devices (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. Entry deleted from font path. (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/100dpi/:unscaled, /usr/share/fonts/X11/75dpi/:unscaled, /usr/share/fonts/X11/Type1, /usr/share/fonts/X11/100dpi, /usr/share/fonts/X11/75dpi, /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, built-ins (==) ModulePath set to "/usr/lib/xorg/extra-modules,/usr/lib/xorg/modules" (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices. (II) Loader magic: 0x81f0e80 (II) Module ABI versions: X.Org ANSI C Emulation: 0.4 X.Org Video Driver: 6.0 X.Org XInput driver : 7.0 X.Org Server Extension : 2.0 (++) using VT number 7 (--) PCI:*(0:1:0:0) 1002:9442:174b:e104 ATI Technologies Inc RV770 [Radeon HD 4850] rev 0, Mem @ 0xc0000000/268435456, 0xfe7e0000/65536, I/O @ 0x0000a000/256, BIOS @ 0x????????/131072 (--) PCI: (0:2:0:0) 1002:954f:1462:1618 ATI Technologies Inc RV710 [Radeon HD 4350] rev 0, Mem @ 0xd0000000/268435456, 0xfe8e0000/65536, I/O @ 0x0000b000/256, BIOS @ 0x????????/131072 (WW) Open ACPI failed (/var/run/acpid.socket) (No such file or directory) (II) "extmod" will be loaded by default. (II) "dbe" will be loaded by default. (II) "glx" will be loaded. This was enabled by default and also specified in the config file. (II) "record" will be loaded by default. (II) "dri" will be loaded by default. (II) "dri2" will be loaded by default. 
(II) LoadModule: "glx" (II) Loading /usr/lib/xorg/extra-modules/modules/extensions/libglx.so (II) Module glx: vendor="FireGL - ATI Technologies Inc." compiled for 7.5.0, module version = 1.0.0 (II) Loading extension GLX (II) LoadModule: "extmod" (II) Loading /usr/lib/xorg/modules/extensions/libextmod.so (II) Module extmod: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.0.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 2.0 (II) Loading extension MIT-SCREEN-SAVER (II) Loading extension XFree86-VidModeExtension (II) Loading extension XFree86-DGA (II) Loading extension DPMS (II) Loading extension XVideo (II) Loading extension XVideo-MotionCompensation (II) Loading extension X-Resource (II) LoadModule: "dbe" (II) Loading /usr/lib/xorg/modules/extensions/libdbe.so (II) Module dbe: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.0.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 2.0 (II) Loading extension DOUBLE-BUFFER (II) LoadModule: "record" (II) Loading /usr/lib/xorg/modules/extensions/librecord.so (II) Module record: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.13.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 2.0 (II) Loading extension RECORD (II) LoadModule: "dri" (II) Loading /usr/lib/xorg/modules/extensions/libdri.so (II) Module dri: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.0.0 ABI class: X.Org Server Extension, version 2.0 (II) Loading extension XFree86-DRI (II) LoadModule: "dri2" (II) Loading /usr/lib/xorg/modules/extensions/libdri2.so (II) Module dri2: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.1.0 ABI class: X.Org Server Extension, version 2.0 (II) Loading extension DRI2 (II) LoadModule: "fglrx" (II) Loading /usr/lib/xorg/extra-modules/modules/drivers/fglrx_drv.so (II) Module fglrx: vendor="FireGL - ATI Technologies Inc." compiled for 1.7.1, module version = 8.72.11 Module class: X.Org Video Driver (II) Loading sub module "fglrxdrm" (II) LoadModule: "fglrxdrm" (II) Loading /usr/lib/xorg/extra-modules/modules/linux/libfglrxdrm.so (II) Module fglrxdrm: vendor="FireGL - ATI Technologies Inc." 
compiled for 1.7.1, module version = 8.72.11 (II) ATI Proprietary Linux Driver Version Identifier:8.72.11 (II) ATI Proprietary Linux Driver Release Identifier: 8.723.1 (II) ATI Proprietary Linux Driver Build Date: Apr 8 2010 21:40:29 (II) Primary Device is: PCI 01@00:00:0 (WW) Falling back to old probe method for fglrx (II) Loading PCS database from /etc/ati/amdpcsdb (--) Assigning device section with no busID to primary device (WW) fglrx: No matching Device section for instance (BusID PCI:0@2:0:0) found (--) Chipset Supported AMD Graphics Processor (0x9442) found (WW) fglrx: No matching Device section for instance (BusID PCI:0@1:0:1) found (WW) fglrx: No matching Device section for instance (BusID PCI:0@2:0:1) found (**) ChipID override: 0x954F (**) Chipset Supported AMD Graphics Processor (0x954F) found (II) AMD Video driver is running on a device belonging to a group targeted for this release (II) AMD Video driver is signed (II) fglrx(0): pEnt->device->identifier=0x9428aa0 (II) pEnt->device->identifier=(nil) (II) fglrx(0): === [atiddxPreInit] === begin (II) Loading sub module "vgahw" (II) LoadModule: "vgahw" (II) Loading /usr/lib/xorg/modules/libvgahw.so (II) Module vgahw: vendor="X.Org Foundation" compiled for 1.7.6, module version = 0.1.0 ABI class: X.Org Video Driver, version 6.0 (II) fglrx(0): Creating default Display subsection in Screen section "Default Screen" for depth/fbbpp 24/32 (**) fglrx(0): Depth 24, (--) framebuffer bpp 32 (II) fglrx(0): Pixel depth = 24 bits stored in 4 bytes (32 bpp pixmaps) (==) fglrx(0): Default visual is TrueColor (==) fglrx(0): RGB weight 888 (II) fglrx(0): Using 8 bits per RGB (==) fglrx(0): Buffer Tiling is ON (II) Loading sub module "fglrxdrm" (II) LoadModule: "fglrxdrm" (II) Reloading /usr/lib/xorg/extra-modules/modules/linux/libfglrxdrm.so ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiOpenByBusid: Searching for BusID PCI:1:0:0 ukiOpenDevice: node name is /dev/ati/card0 ukiOpenDevice: open result is 10, (OK) ukiOpenByBusid: ukiOpenMinor returns 10 ukiOpenByBusid: ukiGetBusid reports PCI:2:0:0 ukiOpenDevice: node name is /dev/ati/card1 ukiOpenDevice: open result is 10, (OK) ukiOpenByBusid: ukiOpenMinor returns 10 ukiOpenByBusid: ukiGetBusid reports PCI:1:0:0 ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiOpenByBusid: Searching for BusID PCI:2:0:0 ukiOpenDevice: node name is /dev/ati/card0 ukiOpenDevice: open result is 11, (OK) ukiOpenByBusid: ukiOpenMinor returns 11 ukiOpenByBusid: ukiGetBusid reports PCI:2:0:0 (--) fglrx(0): Chipset: "ATI Radeon HD 4800 Series" (Chipset = 0x9442) (--) fglrx(0): (PciSubVendor = 0x174b, PciSubDevice = 0xe104) (==) fglrx(0): board vendor info: third party graphics adapter - NOT original ATI (--) fglrx(0): Linear framebuffer (phys) at 0xc0000000 (--) fglrx(0): MMIO registers at 0xfe7e0000 (--) fglrx(0): I/O port at 0x0000a000 (==) fglrx(0): ROM-BIOS at 0x000c0000 (II) fglrx(0): AC Adapter is used (II) fglrx(0): Primary V_BIOS segment is: 0xc000 (II) Loading sub module "vbe" (II) LoadModule: "vbe" (II) Loading /usr/lib/xorg/modules/libvbe.so (II) Module vbe: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.1.0 ABI class: X.Org Video Driver, version 6.0 (II) fglrx(0): VESA BIOS detected (II) fglrx(0): VESA VBE Version 3.0 (II) fglrx(0): VESA VBE Total Mem: 16384 kB (II) fglrx(0): VESA VBE OEM: ATI ATOMBIOS (II) fglrx(0): VESA VBE OEM Software Rev: 11.13 (II) fglrx(0): VESA VBE OEM Vendor: (C) 1988-2005, 
ATI Technologies Inc. (II) fglrx(0): VESA VBE OEM Product: RV770 (II) fglrx(0): VESA VBE OEM Product Rev: 01.00 (II) fglrx(0): ATI Video BIOS revision 9 or later detected (--) fglrx(0): Video RAM: 524288 kByte, Type: GDDR3 (II) fglrx(0): PCIE card detected (--) fglrx(0): Using per-process page tables (PPPT) as GART. (WW) fglrx(0): board is an unknown third party board, chipset is supported (--) fglrx(0): Chipset: "ATI Radeon HD 4300/4500 Series" (Chipset = 0x954f) (--) fglrx(0): (PciSubVendor = 0x1462, PciSubDevice = 0x1618) (==) fglrx(0): board vendor info: third party graphics adapter - NOT original ATI (--) fglrx(0): Linear framebuffer (phys) at 0xd0000000 (--) fglrx(0): MMIO registers at 0xfe8e0000 (--) fglrx(0): I/O port at 0x0000b000 (==) fglrx(0): ROM-BIOS at 0x000c0000 (II) fglrx(0): AC Adapter is used (II) fglrx(0): Invalid ATI BIOS from int10, the adapter is not VGA-enabled (II) fglrx(0): ATI Video BIOS revision 9 or later detected (--) fglrx(0): Video RAM: 524288 kByte, Type: DDR2 (II) fglrx(0): PCIE card detected (--) fglrx(0): Using per-process page tables (PPPT) as GART. (WW) fglrx(0): board is an unknown third party board, chipset is supported (II) fglrx(0): Using adapter: 1:0.0. (II) fglrx(0): [FB] MC range(MCFBBase = 0xf00000000, MCFBSize = 0x20000000) (II) fglrx(0): Interrupt handler installed at IRQ 31. (II) fglrx(0): Using adapter: 2:0.0. (II) fglrx(0): [FB] MC range(MCFBBase = 0xf00000000, MCFBSize = 0x20000000) (II) fglrx(0): RandR 1.2 support is enabled! (II) fglrx(0): RandR 1.2 rotation support is enabled! (==) fglrx(0): Center Mode is disabled (II) Loading sub module "fb" (II) LoadModule: "fb" (II) Loading /usr/lib/xorg/modules/libfb.so (II) Module fb: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.0.0 ABI class: X.Org ANSI C Emulation, version 0.4 (II) Loading sub module "ddc" (II) LoadModule: "ddc" (II) Module "ddc" already built-in (II) fglrx(0): Finished Initialize PPLIB! 
(II) Loading sub module "ddc" (II) LoadModule: "ddc" (II) Module "ddc" already built-in (II) fglrx(0): Connected Display0: DFP on external TMDS [tmds2] (II) fglrx(0): Display0 EDID data --------------------------- (II) fglrx(0): Manufacturer: DEL Model: a038 Serial#: 810829397 (II) fglrx(0): Year: 2008 Week: 51 (II) fglrx(0): EDID Version: 1.3 (II) fglrx(0): Digital Display Input (II) fglrx(0): Max Image Size [cm]: horiz.: 53 vert.: 30 (II) fglrx(0): Gamma: 2.20 (II) fglrx(0): DPMS capabilities: StandBy Suspend Off (II) fglrx(0): Supported color encodings: RGB 4:4:4 YCrCb 4:4:4 (II) fglrx(0): Default color space is primary color space (II) fglrx(0): First detailed timing is preferred mode (II) fglrx(0): redX: 0.640 redY: 0.330 greenX: 0.300 greenY: 0.600 (II) fglrx(0): blueX: 0.150 blueY: 0.060 whiteX: 0.312 whiteY: 0.329 (II) fglrx(0): Supported established timings: (II) fglrx(0): 720x400@70Hz (II) fglrx(0): 640x480@60Hz (II) fglrx(0): 640x480@75Hz (II) fglrx(0): 800x600@60Hz (II) fglrx(0): 800x600@75Hz (II) fglrx(0): 1024x768@60Hz (II) fglrx(0): 1024x768@75Hz (II) fglrx(0): 1280x1024@75Hz (II) fglrx(0): Manufacturer's mask: 0 (II) fglrx(0): Supported standard timings: (II) fglrx(0): #0: hsize: 1152 vsize 864 refresh: 75 vid: 20337 (II) fglrx(0): #1: hsize: 1280 vsize 1024 refresh: 60 vid: 32897 (II) fglrx(0): #2: hsize: 1920 vsize 1080 refresh: 60 vid: 49361 (II) fglrx(0): Supported detailed timing: (II) fglrx(0): clock: 148.5 MHz Image Size: 531 x 298 mm (II) fglrx(0): h_active: 1920 h_sync: 2008 h_sync_end 2052 h_blank_end 2200 h_border: 0 (II) fglrx(0): v_active: 1080 v_sync: 1084 v_sync_end 1089 v_blanking: 1125 v_border: 0 (II) fglrx(0): Serial No: Y183D8CF0TFU (II) fglrx(0): Monitor name: DELL S2409W (II) fglrx(0): Ranges: V min: 50 V max: 76 Hz, H min: 30 H max: 83 kHz, PixClock max 170 MHz (II) fglrx(0): EDID (in hex): (II) fglrx(0): 00ffffffffffff0010ac38a055465430 (II) fglrx(0): 3312010380351e78eeee91a3544c9926 (II) fglrx(0): 0f5054a54b00714f8180d1c001010101 (II) fglrx(0): 010101010101023a801871382d40582c (II) fglrx(0): 4500132a2100001e000000ff00593138 (II) fglrx(0): 3344384346305446550a000000fc0044 (II) fglrx(0): 454c4c205332343039570a20000000fd (II) fglrx(0): 00324c1e5311000a2020202020200059 (II) fglrx(0): End of Display0 EDID data -------------------- (II) fglrx(0): Output DFP2 has no monitor section (II) fglrx(0): Output DFP_EXTTMDS has no monitor section (II) fglrx(0): Output CRT1 has no monitor section (II) fglrx(0): Output CRT2 has no monitor section (II) fglrx(0): Output DFP2 disconnected (II) fglrx(0): Output DFP_EXTTMDS connected (II) fglrx(0): Output CRT1 disconnected (II) fglrx(0): Output CRT2 disconnected (II) fglrx(0): Using exact sizes for initial modes (II) fglrx(0): Output DFP_EXTTMDS using initial mode 1920x1080 (II) fglrx(0): DPI set to (96, 96) (II) fglrx(0): Adapter ATI Radeon HD 4800 Series has 2 configurable heads and 1 displays connected. 
(==) fglrx(0): QBS disabled (==) fglrx(0): PseudoColor visuals disabled (II) Loading sub module "ramdac" (II) LoadModule: "ramdac" (II) Module "ramdac" already built-in (==) fglrx(0): NoAccel = NO (==) fglrx(0): NoDRI = NO (==) fglrx(0): Capabilities: 0x00000000 (==) fglrx(0): CapabilitiesEx: 0x00000000 (==) fglrx(0): OpenGL ClientDriverName: "fglrx_dri.so" (==) fglrx(0): UseFastTLS=0 (==) fglrx(0): BlockSignalsOnLock=1 (--) Depth 24 pixmap format is 32 bpp (II) Loading extension ATIFGLRXDRI (II) fglrx(0): doing swlDriScreenInit (II) fglrx(0): swlDriScreenInit for fglrx driver ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiOpenByBusid: Searching for BusID PCI:1:0:0 ukiOpenDevice: node name is /dev/ati/card0 ukiOpenDevice: open result is 17, (OK) ukiOpenByBusid: ukiOpenMinor returns 17 ukiOpenByBusid: ukiGetBusid reports PCI:2:0:0 ukiOpenDevice: node name is /dev/ati/card1 ukiOpenDevice: open result is 17, (OK) ukiOpenByBusid: ukiOpenMinor returns 17 ukiOpenByBusid: ukiGetBusid reports PCI:1:0:0 (II) fglrx(0): [uki] DRM interface version 1.0 (II) fglrx(0): [uki] created "fglrx" driver at busid "PCI:1:0:0" (II) fglrx(0): [uki] added 8192 byte SAREA at 0x2000 (II) fglrx(0): [uki] mapped SAREA 0x2000 to 0xb6996000 (II) fglrx(0): [uki] framebuffer handle = 0x3000 (II) fglrx(0): [uki] added 1 reserved context for kernel (II) fglrx(0): swlDriScreenInit done (II) fglrx(0): Kernel Module Version Information: (II) fglrx(0): Name: fglrx (II) fglrx(0): Version: 8.72.11 (II) fglrx(0): Date: Apr 8 2010 (II) fglrx(0): Desc: ATI FireGL DRM kernel module (II) fglrx(0): Kernel Module version matches driver. (II) fglrx(0): Kernel Module Build Time Information: (II) fglrx(0): Build-Kernel UTS_RELEASE: 2.6.32-22-generic-pae (II) fglrx(0): Build-Kernel MODVERSIONS: yes (II) fglrx(0): Build-Kernel __SMP__: yes (II) fglrx(0): Build-Kernel PAGE_SIZE: 0x1000 (II) fglrx(0): [uki] register handle = 0x00004000 (II) fglrx(0): DRI initialization successfull! (II) fglrx(0): FBADPhys: 0xf00000000 FBMappedSize: 0x01068000 (II) fglrx(0): FBMM initialized for area (0,0)-(1920,2240) (II) fglrx(0): FBMM auto alloc for area (0,0)-(1920,1920) (front color buffer - assumption) (II) fglrx(0): Largest offscreen area available: 1920 x 320 (==) fglrx(0): Backing store disabled (II) Loading extension FGLRXEXTENSION (==) fglrx(0): DPMS enabled (II) fglrx(0): Initialized in-driver Xinerama extension (**) fglrx(0): Textured Video is enabled. 
(II) LoadModule: "glesx" (II) Loading /usr/lib/xorg/extra-modules/modules/glesx.so (II) Module glesx: vendor="X.Org Foundation" compiled for 1.7.1, module version = 1.0.0 (II) Loading extension GLESX (II) Loading sub module "xaa" (II) LoadModule: "xaa" (II) Loading /usr/lib/xorg/modules/libxaa.so (II) Module xaa: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.2.1 ABI class: X.Org Video Driver, version 6.0 (II) fglrx(0): GLESX enableFlags = 94 (II) fglrx(0): Using XFree86 Acceleration Architecture (XAA) Screen to screen bit blits Solid filled rectangles Solid Horizontal and Vertical Lines Driver provided ScreenToScreenBitBlt replacement Driver provided FillSolidRects replacement (II) fglrx(0): GLESX is enabled (II) LoadModule: "amdxmm" (II) Loading /usr/lib/xorg/extra-modules/modules/amdxmm.so (II) Module amdxmm: vendor="X.Org Foundation" compiled for 1.7.1, module version = 1.0.0 (II) Loading extension AMDXVOPL (II) fglrx(0): UVD2 feature is available (II) fglrx(0): Enable composite support successfully (II) fglrx(0): X context handle = 0x1 (II) fglrx(0): [DRI] installation complete (==) fglrx(0): Silken mouse enabled (==) fglrx(0): Using HW cursor of display infrastructure! (II) fglrx(0): Disabling in-server RandR and enabling in-driver RandR 1.2. (--) RandR disabled (II) Found 2 VGA devices: arbiter wrapping enabled (II) Initializing built-in extension Generic Event Extension (II) Initializing built-in extension SHAPE (II) Initializing built-in extension MIT-SHM (II) Initializing built-in extension XInputExtension (II) Initializing built-in extension XTEST (II) Initializing built-in extension BIG-REQUESTS (II) Initializing built-in extension SYNC (II) Initializing built-in extension XKEYBOARD (II) Initializing built-in extension XC-MISC (II) Initializing built-in extension SECURITY (II) Initializing built-in extension XINERAMA (II) Initializing built-in extension XFIXES (II) Initializing built-in extension RENDER (II) Initializing built-in extension RANDR (II) Initializing built-in extension COMPOSITE (II) Initializing built-in extension DAMAGE ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiOpenByBusid: Searching for BusID PCI:1:0:0 ukiOpenDevice: node name is /dev/ati/card0 ukiOpenDevice: open result is 18, (OK) ukiOpenByBusid: ukiOpenMinor returns 18 ukiOpenByBusid: ukiGetBusid reports PCI:2:0:0 ukiOpenDevice: node name is /dev/ati/card1 ukiOpenDevice: open result is 18, (OK) ukiOpenByBusid: ukiOpenMinor returns 18 ukiOpenByBusid: ukiGetBusid reports PCI:1:0:0 (II) AIGLX: Loaded and initialized /usr/lib/dri/fglrx_dri.so (II) GLX: Initialized DRI GL provider for screen 0 (II) fglrx(0): Enable the clock gating! 
(II) fglrx(0): Setting screen physical size to 507 x 285 (II) XKB: reuse xkmfile /var/lib/xkb/server-B20D7FC79C7F597315E3E501AEF10E0D866E8E92.xkm (II) config/udev: Adding input device Power Button (/dev/input/event1) (**) Power Button: Applying InputClass "evdev keyboard catchall" (II) LoadModule: "evdev" (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so (II) Module evdev: vendor="X.Org Foundation" compiled for 1.7.6, module version = 2.3.2 Module class: X.Org XInput Driver ABI class: X.Org XInput driver, version 7.0 (**) Power Button: always reports core events (**) Power Button: Device: "/dev/input/event1" (II) Power Button: Found keys (II) Power Button: Configuring as keyboard (II) XINPUT: Adding extended input device "Power Button" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device Power Button (/dev/input/event0) (**) Power Button: Applying InputClass "evdev keyboard catchall" (**) Power Button: always reports core events (**) Power Button: Device: "/dev/input/event0" (II) Power Button: Found keys (II) Power Button: Configuring as keyboard (II) XINPUT: Adding extended input device "Power Button" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device Logitech USB-PS/2 Optical Mouse (/dev/input/event3) (**) Logitech USB-PS/2 Optical Mouse: Applying InputClass "evdev pointer catchall" (**) Logitech USB-PS/2 Optical Mouse: always reports core events (**) Logitech USB-PS/2 Optical Mouse: Device: "/dev/input/event3" (II) Logitech USB-PS/2 Optical Mouse: Found 12 mouse buttons (II) Logitech USB-PS/2 Optical Mouse: Found scroll wheel(s) (II) Logitech USB-PS/2 Optical Mouse: Found relative axes (II) Logitech USB-PS/2 Optical Mouse: Found x and y relative axes (II) Logitech USB-PS/2 Optical Mouse: Configuring as mouse (**) Logitech USB-PS/2 Optical Mouse: YAxisMapping: buttons 4 and 5 (**) Logitech USB-PS/2 Optical Mouse: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200 (II) XINPUT: Adding extended input device "Logitech USB-PS/2 Optical Mouse" (type: MOUSE) (II) Logitech USB-PS/2 Optical Mouse: initialized for relative axes. 
(II) config/udev: Adding input device Logitech USB-PS/2 Optical Mouse (/dev/input/mouse1) (II) No input driver/identifier specified (ignoring) (II) config/udev: Adding input device Logitech USB Multimedia Keyboard (/dev/input/event4) (**) Logitech USB Multimedia Keyboard: Applying InputClass "evdev keyboard catchall" (**) Logitech USB Multimedia Keyboard: always reports core events (**) Logitech USB Multimedia Keyboard: Device: "/dev/input/event4" (II) Logitech USB Multimedia Keyboard: Found keys (II) Logitech USB Multimedia Keyboard: Configuring as keyboard (II) XINPUT: Adding extended input device "Logitech USB Multimedia Keyboard" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device Logitech USB Multimedia Keyboard (/dev/input/event5) (**) Logitech USB Multimedia Keyboard: Applying InputClass "evdev keyboard catchall" (**) Logitech USB Multimedia Keyboard: always reports core events (**) Logitech USB Multimedia Keyboard: Device: "/dev/input/event5" (II) Logitech USB Multimedia Keyboard: Found keys (II) Logitech USB Multimedia Keyboard: Configuring as keyboard (II) XINPUT: Adding extended input device "Logitech USB Multimedia Keyboard" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device KEYBOARD (/dev/input/event6) (**) KEYBOARD: Applying InputClass "evdev keyboard catchall" (**) KEYBOARD: always reports core events (**) KEYBOARD: Device: "/dev/input/event6" (II) KEYBOARD: Found keys (II) KEYBOARD: Configuring as keyboard (II) XINPUT: Adding extended input device "KEYBOARD" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device KEYBOARD (/dev/input/event7) (**) KEYBOARD: Applying InputClass "evdev keyboard catchall" (**) KEYBOARD: always reports core events (**) KEYBOARD: Device: "/dev/input/event7" (II) KEYBOARD: Found 14 mouse buttons (II) KEYBOARD: Found scroll wheel(s) (II) KEYBOARD: Found relative axes (II) KEYBOARD: Found keys (II) KEYBOARD: Configuring as mouse (II) KEYBOARD: Configuring as keyboard (**) KEYBOARD: YAxisMapping: buttons 4 and 5 (**) KEYBOARD: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200 (II) XINPUT: Adding extended input device "KEYBOARD" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (EE) KEYBOARD: failed to initialize for relative axes. 
(II) config/udev: Adding input device KEYBOARD (/dev/input/mouse2) (II) No input driver/identifier specified (ignoring) (II) config/udev: Adding input device Macintosh mouse button emulation (/dev/input/event2) (**) Macintosh mouse button emulation: Applying InputClass "evdev pointer catchall" (**) Macintosh mouse button emulation: always reports core events (**) Macintosh mouse button emulation: Device: "/dev/input/event2" (II) Macintosh mouse button emulation: Found 3 mouse buttons (II) Macintosh mouse button emulation: Found relative axes (II) Macintosh mouse button emulation: Found x and y relative axes (II) Macintosh mouse button emulation: Configuring as mouse (**) Macintosh mouse button emulation: YAxisMapping: buttons 4 and 5 (**) Macintosh mouse button emulation: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200 (II) XINPUT: Adding extended input device "Macintosh mouse button emulation" (type: MOUSE) (II) Macintosh mouse button emulation: initialized for relative axes. (II) config/udev: Adding input device Macintosh mouse button emulation (/dev/input/mouse0) (II) No input driver/identifier specified (ignoring) (II) fglrx(0): Restoring Recent Mode via PCS is not supported in RANDR 1.2 capable environments
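    One detail that stands out in the log above is the warning "fglrx: No matching Device section for instance (BusID PCI:0@2:0:0)" — the proprietary driver only has a Device section for the first card, so the second adapter (and the monitors attached to it) is never configured. A hedged sketch of letting fglrx regenerate xorg.conf for every detected adapter (back up the current file first; this assumes the aticonfig tool shipped with the fglrx driver the log shows is loaded):

        # Sketch only: back up xorg.conf, let fglrx write Device/Screen sections
        # for all adapters, then restart X (or reboot) to pick up the new config.
        sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf.backup
        sudo aticonfig --initial -f --adapter=all
        sudo reboot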

    Read the article

  • How to get Virtual PC to recognize MIDI devices?

    - by bparker
    Hey all. I have an XP Pro virtual machine running inside Virtual PC 2007. My host machine is x64 Windows 7. I have a MIDI keyboard hooked up to my machine via a Turtle Beach USB to MIDI 1x1 cable. I have installed the driver and software on my host machine and ran a soundcheck, and everything appears to be working fine. Playback is sent to the MIDI device with no problems. However, when I attempt to install the driver and run a soundcheck in my XP virtual machine, the device is not found. Other USB devices (mouse, keyboard, flash drives) work fine in the virtual machine, but not the MIDI keyboard. I'm not sure what steps to take to troubleshoot this and get the VM to recognize the MIDI keyboard. Any help or suggestions would be greatly appreciated. Thanks.

    Read the article

  • ruby: invalid opcode

    - by adamo
    There's a fairly complex application that runs on two VMs (on Xen). Both VMs run CentOS 6.2 with the exact same packages and configuration for every application running (minus networking, which is different). SELinux is disabled on both. On machine A the application builds perfectly. On machine B, when running some tests, we get: ruby[2010] trap invalid opcode ip:7ff9d2944c30 sp:7fff9797e0f8 error:0 in ld-2.12.so[7ff9d2930000+20000] Digging a bit more to find out where the machines differ, machine A has: model name : Six-Core AMD Opteron(tm) Processor 2423 HE and machine B: model name : AMD Opteron(TM) Processor 6272 I've tried booting machine B with cpuid_mask_cpu=fam_10_rev_c in GRUB but it did not help either. So any advice as to how to deal with this, or how to approach the hosting provider so as to run this VM on another physical machine, will be greatly appreciated.
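    An illegal-instruction trap inside ld-2.12.so usually means glibc selected an optimized code path using an instruction-set extension the guest CPU does not actually expose, so comparing the CPU feature flags the two guests advertise is a reasonable first diagnostic. A small sketch, run on each machine (the file names are arbitrary):

        # Sketch only: capture the CPU feature flags this guest advertises.
        grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > /tmp/cpuflags-$(hostname).txt
        # Copy both files to one machine and compare:
        # diff /tmp/cpuflags-machineA.txt /tmp/cpuflags-machineB.txt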

    Read the article

< Previous Page | 147 148 149 150 151 152 153 154 155 156 157 158  | Next Page >