Search Results

Search found 14745 results on 590 pages for 'setting'.


  • How do I run multiple MVC apps within a subdomain on IIS7?

    - by Matthew Patrick Cashatt
    Hello and thanks for looking. Background: I am currently wrapping up a development contract, and the client would like me to push a build of the application to their IIS 7-based server, on which they would like to run multiple MVC apps. One of the issues I have right off the bat is that this server is already a subdomain on their larger network. So, if I enter SERVERNAME in my browser, it automatically directs to SERVERNAME.COMPANYNAME.COM. Now, this is just fine if I place my application in the default website/root. In this scenario, clicking a link that requests admin.html directs to SERVERNAME.COMPANYNAME.COM/admin.html as usual. BUT they want me to place the app in a subdomain on this server so that they can also run other apps on the same server. So I assume I need MYAPP.SERVERNAME.COMPANYNAME.COM, but I have no idea how to do that. Complicating matters is that my app and the future ones they wish to install are all MVC based, which intercepts and rewrites URLs. I assume that takes care of itself if I can just get my app into a subdomain to begin with. What I have tried: creating a new site on the server in its own app pool, and setting the binding for that site to MYAPP.SERVERNAME.COMPANYNAME.COM, MYAPP, MYAPP.SERVERNAME, MYAPP.SERVERNAME.COM, and MYAPP.COMPANYNAME.COM. Nothing is working. Am I missing something simple here? Thanks, Matt
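    A minimal sketch of the binding setup, assuming the site content lives in C:\inetpub\MyApp (a placeholder path) and assuming a DNS record (A or CNAME) for MYAPP.SERVERNAME.COMPANYNAME.COM already points at the server - without that DNS entry a host-header binding alone will never receive the request:

        %windir%\system32\inetsrv\appcmd add apppool /name:"MyAppPool"
        %windir%\system32\inetsrv\appcmd add site /name:"MyApp" /physicalPath:"C:\inetpub\MyApp" /bindings:http/*:80:MYAPP.SERVERNAME.COMPANYNAME.COM
        %windir%\system32\inetsrv\appcmd set app "MyApp/" /applicationPool:"MyAppPool"

    For an MVC app the pool would normally run .NET 4 in integrated pipeline mode; the routing itself should then work unchanged, since ASP.NET routes match against the path portion of the URL rather than the host name.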

    Read the article

  • Synckolab will not start automatically

    - by EBV2010
    We have a Kolab server for e-mail, calendar and contacts, with Thunderbird as the client. The add-ons are Lightning and Synckolab. The workstations are Kubuntu, most 10.04, some 11.10. It basically works, except for one nagging problem: the automatic sync (that is, the setting that starts Synckolab when Thunderbird starts and every x minutes thereafter) does not fire. We went through the whole routine: setting it, setting it to zero and back to any number of minutes, restarting Thunderbird or the entire computer to make sure it sees and sets it. The configuration console reflects the changes, but it still will not automatically fire Synckolab. Manual syncs work without any problem (none that we've seen - they reflect all the added, changed, etc. calendar events). In short: Synckolab does not fire automatically with any setting we have thought of.

    Read the article

  • Grub gives messages about the boot sector being used by other software. What should I do?

    - by Bobble
    This only happens with one of my computers. It is an elderly laptop that has had a long and varied history with several operating systems, but in its retirement it is acting as a server for my home network using Ubuntu 12.04. It is a single-boot system, there are no other systems installed. Every so often, whenever there is a grub upgrade, I notice a message like this: Setting up grub-common (1.99-21ubuntu3.4) ... Installing new version of config file /etc/grub.d/00_header ... Setting up grub2-common (1.99-21ubuntu3.4) ... Setting up grub-pc-bin (1.99-21ubuntu3.4) ... Setting up grub-pc (1.99-21ubuntu3.4) ... /usr/sbin/grub-setup: warn: Sector 32 is already in use by FlexNet; avoiding it. This software may cause boot or other problems in future. Please ask its authors not to store data in the boot track. Installation finished. No error reported. Should I be worried about this? What (if anything) should I do about it?

    Read the article

  • iTunes equalizer per-track settings

    - by DA
    I'm a little confused about iTunes' ability to set EQ presets on a per-song basis. I have a song that I open, set to a given EQ preset, then close. When I play this song, however, the audio isn't being adjusted. If I open the EQ and turn it 'on', though, it then obeys the song's EQ setting. Of course, if I leave the EQ on, then songs that do not have their own EQ setting default to whatever the EQ was originally set at. Not a HUGE deal, but it means that I would normally have to keep the EQ turned on and set to flat, so that any song that I HAVE given an EQ setting will use it. Am I understanding how that works correctly? Is there a way to have individual songs use their EQ setting without having to turn on the EQ for everything?

    Read the article

  • How can I re-program/flash a backup of the BIOS?

    - by user285705
    I have some computers that have a particular BIOS setting that keeps everything running smoothly. The setting is not the default setting for the motherboard, so when the CMOS battery dies, the setting is erased and causes the user problems. How can I back up the BIOS and the settings I have now, and flash that file onto my entire stock of computers? I have attempted to use awdflash to back up my BIOS and then write that backup to the ROM chip, but I keep getting an error. It tells me that my file number doesn't match the system, or something like that. Basically, the file is incompatible with the chip - but I just backed it up from that chip. If anyone can shed some light on this for me it would be helpful.

    Read the article

  • Timeout Considerations for Solicit Response – Part 2

    - by Michael Stephenson
    To follow up a previous article about timeouts and how they can affect your application I have extended the sample we were using to include WCF. I will execute some test scenarios and discuss the results. The sample We begin by consuming exactly the same web service which is sitting on a remote server. This time I have created a .net 3.5 application which will consume the web service using the basichttp binding. To show you the configuration for the consumption of this web service please refer to the below diagram. You can see like before we also have the connectionManagement element in the configuration file. I have added a WCF service reference (also using the asynchronous proxy methods) and have the below code sample in the application which will asynchronously make the web service calls and handle the responses on a call back method invoked by a delegate. If you have read the previous article you will notice that the code is almost the same.   Sample 1 – WCF with Default Timeouts In this test I set about recreating the same scenario as previous where we would run the test but this time using WCF as the messaging component. For the first test I would use the default configuration settings which WCF had setup when we added a reference to the web service. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"   The Test We simulated 21 calls to the web service Test Results The client-side trace is as follows:   The server-side trace is as follows: Some observations on the results are as follows: The timeouts happened quicker than in the previous tests because some calls were timing out before they attempted to connect to the server The first few calls that timed out did actually connect to the server and did execute successfully on the server   Test 2 – Increase Open Connection Timeout & Send Timeout In this test I wanted to increase both the send and open timeout values to try and give everything a chance to go through. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"   The Test We simulated 21 calls to the web service   Test Results The client side trace for this test was   The server-side trace for this test was: Some observations on this test are: This test proved if the timeouts are high enough everything will just go through   Test 3 – Increase just the Send Timeout In this test we wanted to increase just the send timeout. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"   The Test We simulated 21 calls to the web service   Test Results The below is the client side trace The below is the server side trace Some observations on this test are: In this test from both the client and server perspective everything ran through fine The open connection timeout did not seem to have any effect   Test 4 – Increase Just the Open Connection Timeout In this test I wanted to validate the change to the open connection setting by increasing just this on its own. 
The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"   The Test We simulated 21 calls to the web service Test Results The client side trace was The server side trace was Some observations on this test are: In this test you can see that the open connection which relates to opening the channel timeout increase was not the thing which stopped the calls timing out It's the send of data which is timing out On the server you can see that the successful few calls were fine but there were also a few calls which hit the server but timed out on the client You can see that not all calls hit the server which was one of the problems with the WSE and ASMX options   Test 5 – Smaller Increase in Send Timeout In this test I wanted to make a smaller increase to the send timeout than previous just to prove that it was the key setting which was controlling what was timing out. The timeout values for this test are: openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:02:30"   The Test We simulated 21 calls to the web service Test Results The client side trace was   The server side trace was Some observations on this test are: You can see that most of the calls got through fine On the client you can see that call 20 timed out but still hit the server and executed fine.   Summary At this point between the two articles we have quite a lot of scenarios showing the different way the timeout setting have played into our original performance issue, and now we can see how WCF could offer an improved way to handle the problem. To summarise the differences in the timeout properties for the three technology stacks: ASMX The timeout value only applies to the execution time of your request on the server. The timeout does not consider how long your code might be waiting client side to get a connection. WSE The timeout value includes both the time to obtain a connection and also the time to execute the request. A timeout will not be thrown as an error until an attempt to connect to the server is made. This means a 40 second timeout setting may not throw the error until 60 seconds when the connection to the server is made. If the connection to the server is made you should be aware that your message will be processed and you should design for this. WCF The WCF send timeout is the setting most equivalent to the settings we were looking at previously. Like WSE this setting the counter includes the time to get a connection as well as the time to execute on a server. Unlike WSE and ASMX an error will be thrown as soon as the send timeout from making your call from user code has elapsed regardless of whether we are waiting for a connection or have an open connection to the server. This may to a user appear to have better latency in getting an error response compared to WSE or ASMX.
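    For reference, the timeouts discussed above are set on the client binding in app.config. A minimal sketch of the configuration used in Test 3 (send timeout raised, the rest left at the defaults) - the endpoint address and contract name are placeholders rather than values taken from the article:

        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <binding name="LongSendBinding"
                       closeTimeout="00:01:00" openTimeout="00:01:00"
                       receiveTimeout="00:10:00" sendTimeout="00:10:00" />
            </basicHttpBinding>
          </bindings>
          <client>
            <endpoint address="http://remoteserver/TestService.asmx"
                      binding="basicHttpBinding" bindingConfiguration="LongSendBinding"
                      contract="TestService.TestServiceSoap" name="TestServiceSoap" />
          </client>
        </system.serviceModel>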

    Read the article

  • Configuring Application/User Settings in WPF the easy way.

    - by mbcrump
    In this tutorial, we are going to configure the application/user settings in a WPF application the easy way. Most example that I’ve seen on the net involve the ConfigurationManager class and involve creating your own XML file from scratch. I am going to show you a easier way to do it. (in my humble opinion) First, the definitions: User Setting – is designed to be something specific to the user. For example, one user may have a requirement to see certain stocks, news articles or local weather. This can be set at run-time. Application Setting – is designed to store information such as a database connection string. These settings are read-only at run-time. 1) Lets create a new WPF Project and play with a few settings. Once you are inside VS, then paste the following code snippet inside the <Grid> tags. <Grid> <TextBox Height="23" HorizontalAlignment="Left" Margin="12,11,0,0" Name="textBox1" VerticalAlignment="Top" Width="285" Grid.ColumnSpan="2" /> <Button Content="Set Title" Name="button2" Click="button2_Click" Margin="108,40,96,114" /> <TextBlock Height="23" Name="textBlock1" Text="TextBlock" VerticalAlignment="Bottom" Width="377" /> </Grid> Basically, its just a Textbox, Button and TextBlock. The main Window should look like the following:   2) Now we are going to setup our Configuration Settings. Look in the Solution Explorer and double click on the Settings.settings file. Make sure that your settings file looks just like mine included below:   What just happened was the designer created an XML file and created the Settings.Designer.cs file which looks like this: //------------------------------------------------------------------------------ // <auto-generated> // This code was generated by a tool. // Runtime Version:4.0.30319.1 // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. 
// </auto-generated> //------------------------------------------------------------------------------ namespace WPFExam.Properties { [global::System.Runtime.CompilerServices.CompilerGeneratedAttribute()] [global::System.CodeDom.Compiler.GeneratedCodeAttribute("Microsoft.VisualStudio.Editors.SettingsDesigner.SettingsSingleFileGenerator", "10.0.0.0")] internal sealed partial class Settings : global::System.Configuration.ApplicationSettingsBase { private static Settings defaultInstance = ((Settings)(global::System.Configuration.ApplicationSettingsBase.Synchronized(new Settings()))); public static Settings Default { get { return defaultInstance; } } [global::System.Configuration.UserScopedSettingAttribute()] [global::System.Diagnostics.DebuggerNonUserCodeAttribute()] [global::System.Configuration.DefaultSettingValueAttribute("ApplicationName")] public string ApplicationName { get { return ((string)(this["ApplicationName"])); } set { this["ApplicationName"] = value; } } [global::System.Configuration.ApplicationScopedSettingAttribute()] [global::System.Diagnostics.DebuggerNonUserCodeAttribute()] [global::System.Configuration.DefaultSettingValueAttribute("SQL_SRV342")] public string DatabaseServerName { get { return ((string)(this["DatabaseServerName"])); } } } } The XML File is named app.config and looks like this: <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <sectionGroup name="userSettings" type="System.Configuration.UserSettingsGroup, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" > <section name="WPFExam.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" allowExeDefinition="MachineToLocalUser" requirePermission="false" /> </sectionGroup> <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" > <section name="WPFExam.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" /> </sectionGroup> </configSections> <userSettings> <WPFExam.Properties.Settings> <setting name="ApplicationName" serializeAs="String"> <value>ApplicationName</value> </setting> </WPFExam.Properties.Settings> </userSettings> <applicationSettings> <WPFExam.Properties.Settings> <setting name="DatabaseServerName" serializeAs="String"> <value>SQL_SRV342</value> </setting> </WPFExam.Properties.Settings> </applicationSettings> </configuration> 3) The only left now is the code behind the button. Double click the button and replace the MainWindow() method with the following code snippet. public MainWindow() { InitializeComponent(); this.Title = Properties.Settings.Default.ApplicationName; textBox1.Text = Properties.Settings.Default.ApplicationName; textBlock1.Text = Properties.Settings.Default.DatabaseServerName; } private void button2_Click(object sender, RoutedEventArgs e) { Properties.Settings.Default.ApplicationName = textBox1.Text.ToString(); Properties.Settings.Default.Save(); } Run the application and type something in the textbox and hit the Set Title button. Now, restart the application and you should see the text that you entered earlier.   If you look at the button2 click event, you will see that it was actually 2 lines of codes to save to the configuration file. I hope this helps, for more information consult MSDN.
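    One follow-up worth noting: user-scoped settings are saved in a user.config file that is tied to the assembly version, so saved values appear to "disappear" after you ship a new build. A hedged sketch of the usual workaround, assuming you add an extra user-scoped boolean setting named UpgradeRequired with a default value of True (this setting is not part of the article's example):

        // In App.xaml.cs, before the main window loads.
        protected override void OnStartup(StartupEventArgs e)
        {
            base.OnStartup(e);
            if (WPFExam.Properties.Settings.Default.UpgradeRequired)
            {
                // Copy the previous version's user settings into the new version's user.config.
                WPFExam.Properties.Settings.Default.Upgrade();
                WPFExam.Properties.Settings.Default.UpgradeRequired = false;
                WPFExam.Properties.Settings.Default.Save();
            }
        }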

    Read the article

  • Using Transaction Logging to Recover Post-Archived Essbase data

    - by Keith Rosenthal
    Data recovery is typically performed by restoring data from an archive.  Data added or removed since the last archive took place can also be recovered by enabling transaction logging in Essbase.  Transaction logging works by writing transactions to a log store.  The information in the log store can then be recovered by replaying the log store entries in sequence since the last archive took place.  The following information is recorded within a transaction log entry: Sequence ID Username Start Time End Time Request Type A request type can be one of the following categories: Calculations, including the default calculation as well as both server and client side calculations Data loads, including data imports as well as data loaded using a load rule Data clears as well as outline resets Locking and sending data from SmartView and the Spreadsheet Add-In.  Changes from Planning web forms are also tracked since a lock and send operation occurs during this process. You can use the Display Transactions command in the EAS console or the query database MAXL command to view the transaction log entries. Enabling Transaction Logging Transaction logging can be enabled at the Essbase server, application or database level by adding the TRANSACTIONLOGLOCATION essbase.cfg setting.  The following is the TRANSACTIONLOGLOCATION syntax: TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE Note that you can have multiple TRANSACTIONLOGLOCATION entries in the essbase.cfg file.  For example: TRANSACTIONLOGLOCATION Hyperion/trlog NATIVE ENABLE TRANSACTIONLOGLOCATION Sample Hyperion/trlog NATIVE DISABLE The first statement will enable transaction logging for all Essbase applications, and the second statement will disable transaction logging for the Sample application.  As a result, transaction logging will be enabled for all applications except the Sample application. A location on a physical disk other than the disk where ARBORPATH or the disk files reside is recommended to optimize overall Essbase performance. Configuring Transaction Log Replay Although transaction log entries are stored based on the LOGLOCATION parameter of the TRANSACTIONLOGLOCATION essbase.cfg setting, copies of data load and rules files are stored in the ARBORPATH/app/appname/dbname/Replay directory to optimize the performance of replaying logged transactions.  The default is to archive client data loads, but this configuration setting can be used to archive server data loads (including SQL server data loads) or both client and server data loads. To change the type of data to be archived, add the TRANSACTIONLOGDATALOADARCHIVE configuration setting to the essbase.cfg file.  Note that you can have multiple TRANSACTIONLOGDATALOADARCHIVE entries in the essbase.cfg file to adjust settings for individual applications and databases. Replaying the Transaction Log and Transaction Log Security Considerations To replay the transactions, use either the Replay Transactions command in the EAS console or the alter database MAXL command using the replay transactions grammar.  Transactions can be replayed either after a specified log time or using a range of transaction sequence IDs. The default when replaying transactions is to use the security settings of the user who originally performed the transaction.  However, if that user no longer exists or that user's username was changed, the replay operation will fail. 
Instead of using the default security setting, add the REPLAYSECURITYOPTION essbase.cfg setting to use the security settings of the administrator who performs the replay operation.  REPLAYSECURITYOPTION 2 will explicitly use the security settings of the administrator performing the replay operation.  REPLAYSECURITYOPTION 3 will use the administrator security settings if the original user’s security settings cannot be used. Removing Transaction Logs and Archived Replay Data Load and Rules Files Transaction logs and archived replay data load and rules files are not automatically removed and are only removed manually.  Since these files can consume a considerable amount of space, the files should be removed on a periodic basis. The transaction logs should be removed one database at a time instead of all databases simultaneously.  The data load and rules files associated with the replayed transactions should be removed in chronological order from earliest to latest.  In addition, do not remove any data load and rules files with a timestamp later than the timestamp of the most recent archive file. Partitioned Database Considerations For partitioned databases, partition commands such as synchronization commands cannot be replayed.  When recovering data, the partition changes must be replayed manually and logged transactions must be replayed in the correct chronological order. If the partitioned database includes any @XREF commands in the calc script, the logged transactions must be selectively replayed in the correct chronological order between the source and target databases. References For additional information, please see the Oracle EPM System Backup and Recovery Guide.  For EPM 11.1.2.2, the link is http://docs.oracle.com/cd/E17236_01/epm.1112/epm_backup_recovery_1112200.pdf
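    A rough sketch of the MAXL commands referred to above (the application/database name, timestamp and sequence IDs are placeholders, and the exact timestamp format should be checked against the MAXL reference for your release):

        /* list the logged transactions since the last archive */
        query database Sample.Basic list transactions after '11_20_2012:12:00:00' write to file '/tmp/sample_basic_translist.csv';

        /* replay everything after that point in time */
        alter database Sample.Basic replay transactions after '11_20_2012:12:00:00';

        /* or replay a specific range of sequence IDs */
        alter database Sample.Basic replay transactions using sequence_id_range 1 to 20;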

    Read the article

  • Unity launcher Hebrew locale

    - by Gx1sptDTDa
    I am setting up a user account for someone who wants to use a Hebrew locale in Ubuntu 13.04. Everything nicely aligns to the right-hand side when using the Hebrew locale, except for the Unity launcher. This still sticks to the left-hand side, which looks rather odd in an otherwise right-to-left locale (see the screenshot). Now I know moving the Unity launcher is not normally possible, but one would expect it to automagically align to the right-hand side in a right-to-left locale. When setting up the account, I first set the locale to English (since I myself don't understand Hebrew), and only later changed it to Hebrew. Is this the "wrong" way of setting up a right-to-left user account, or is the left-hand side launcher the expected behavior of Unity even in a right-to-left locale?

    Read the article

  • How do I increase the touchpad sensitivity on Acer Aspire One 532h?

    - by Yossi Farjoun
    I got a cheap netbook and put Ubuntu on it without even booting into the Windows it came with. I am now slightly regretting it, since the trackpad is very annoying: it only registers when I press quite hard on it, and even then the motion is so slow that I must drag my poor finger 3 times to get the pointer to move up or down that tiny screen. OK, enough ranting; now to business: I went into the settings and increased sensitivity and acceleration to high. No difference - the behaviour did not change at all. So now my questions are: Is this a hardware problem? Is there a program I can run to see what input the trackpad is receiving, so I know whether it can, in theory, read a light touch and not only the heavy, sandpaper-your-fingertips-off touch? Is there some manual setting that the system might not be setting correctly and which I could change from the terminal?
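    A few commands that may help diagnose this, assuming the pad is handled by the Synaptics X driver (the parameter values below are placeholders to tune by trial, and synclient changes do not survive a reboot unless added to a startup script or an xorg.conf.d snippet):

        xinput list                                          # confirm the touchpad shows up as an input device
        synclient -l                                         # dump the current Synaptics driver settings
        synclient FingerLow=10 FingerHigh=15                 # lower the pressure thresholds so light touches register
        synclient MinSpeed=1.0 MaxSpeed=2.0 AccelFactor=0.10 # speed the pointer up

    If synclient reports that it cannot find the synaptics properties, the pad is probably being driven as a plain PS/2 mouse, which would also explain why the GUI sensitivity sliders have no effect.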

    Read the article

  • At symbol not working for apt-get proxy authentication on Ubuntu 11.10

    - by Shivhari
    I have tried two things in three places to see if it works; please do help me out. Two methods: 1) replacing @ with %40, 2) replacing @ with \@. Three places: 1) an export in the .bashrc file, 2) editing /etc/apt/apt.conf and setting the Acquire proxy lines there, 3) using the gconf editor and setting the values in /system/http_proxy, setting the authentication name and password and checking the use_authentication checkbox. Still there is no success, and I still get a 407 error when trying wget or apt-get update. Please do help me; I've been stuck with this for three hours now. Also, I read somewhere that creating a 01proxy file in /etc/apt/apt.conf.d with an Acquire entry might work. I tried that also, but it doesn't work. Please help.
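    For what it's worth, a sketch of the apt.conf form that usually works, with a placeholder username of "john.doe@example.com" and password of "p@ss": only the @ characters inside the username or password are percent-encoded, while the @ that separates the credentials from the proxy host stays literal.

        // /etc/apt/apt.conf.d/01proxy
        Acquire::http::Proxy  "http://john.doe%40example.com:p%40ss@proxy.example.com:8080/";
        Acquire::https::Proxy "http://john.doe%40example.com:p%40ss@proxy.example.com:8080/";

    Note that apt does not read the GNOME proxy keys, and when run through sudo it typically does not see proxy variables exported in the user's .bashrc either, so the apt.conf.d file is the setting that actually matters for apt-get.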

    Read the article

  • How to make Nautilus name column display at the width of the longest filename

    - by fred.bear
    Prior to a recent glitch where I lost some of my custom settings, Nautilus would open directories and display the name column wide enough to show the longest filename on display. It did this dynamically: if I renamed a file to a longer or shorter name, the column was automatically adjusted to the new width. It's not doing that now, and I have no idea how to get this function back. UPDATE: I hadn't mentioned it, but I am talking about the List view. I have been fossicking around in the GConf Editor and came across a setting for Compact View which does exactly what I am referring to, but it only affects the Compact View. That setting is: gconf-editor /apps/nautilus/compact_view/all_columns_have_same_width. However, I can't find the equivalent setting for List View. I know the capability exists, because I was using it until only a few days ago, when my system got hung up.
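    The compact-view key mentioned above can also be toggled from the terminal, and the same tool can list whatever keys do exist for the list view - a hedged sketch, since the list view may simply not expose an equivalent key in this Nautilus version:

        gconftool-2 --type bool --set /apps/nautilus/compact_view/all_columns_have_same_width false
        gconftool-2 --all-entries /apps/nautilus/list_view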

    Read the article

  • Quick guide to Oracle IRM 11g: Configuring SSL

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index So far in this guide we have an IRM Server up and running, however I skipped over SSL configuration in the previous article because I wanted to focus in more detail now. You can, if you wish, not bother with setting up SSL, but considering this is a security technology it is worthwhile doing. Contents Setting up a one way, self signed SSL certificate in WebLogic Setting up an official SSL certificate in Apache 2.x Configuring Apache to proxy traffic to the IRM server There are two common scenarios in which an Oracle IRM server is configured. For a development or evaluation system, people usually communicate directly to the WebLogic Server running the IRM service. However in a production environment and for some proof of concept evaluations that require a setup reflecting a production system, the traffic to the IRM server travels via a web server proxy, commonly Apache. In this guide we are building an Oracle Enterprise Linux based IRM service and this article will go over the configuration of SSL in WebLogic and also in Apache. Like in the past articles, we are going to use two host names in the configuration below,irm.company.com will refer to the public Apache server irm.company.internal will refer to the internal WebLogic IRM server Setting up a one way, self signed SSL certificate in WebLogic First lets look at creating just a simple self signed SSL certificate to be used in WebLogic. This is a quick and easy way to get SSL working in your environment, however the downside is that no browsers are going to trust this certificate you create and you'll need to manually install the certificate onto any machine's communicating with the server. This is fine for development or when you have only a few users evaluating the system, but for any significant use it's usually better to have a fully trusted certificate in use and I explain that in the next section. But for now lets go through creating, installing and testing a self signed certificate. We use a library in Java to create the certificates, open a console and running the following commands. Note you should choose your own secure passwords whenever you see password below. [oracle@irm /] source /oracle/middleware/wlserver_10.3/server/bin/setWLSEnv.sh [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/ [oracle@irm /] java utils.CertGen -selfsigned -certfile MyOwnSelfCA.cer -keyfile MyOwnSelfKey.key -keyfilepass password -cn "irm.oracle.demo" [oracle@irm /] java utils.ImportPrivateKey -keystore MyOwnIdentityStore.jks -storepass password -keypass password -alias trustself -certfile MyOwnSelfCA.cer.pem -keyfile MyOwnSelfKey.key.pem -keyfilepass password [oracle@irm /] keytool -import -trustcacerts -alias trustself -keystore TrustMyOwnSelf.jks -file MyOwnSelfCA.cer.der -keyalg RSA We now have two Java Key Stores, MyOwnIdentityStore.jks and TrustMyOwnSelf.jks. These contain keys and certificates which we will use in WebLogic Server. Now we need to tell the IRM server to use these stores when setting up SSL connections for incoming requests. Make sure the Admin server is running and login into the WebLogic Console at http://irm.company.intranet:7001/console and do the following; In the menu on the left, select the + next to Environment to expose the submenu, then click on Servers. You will see two servers in the list, AdminServer(admin) and IRM_server1. 
If the IRM server is running, shut it down either by hitting CONTROL + C in the console window it was started from, or you can switch to the CONTROL tab, select IRM_server1 and then select the Shutdown menu and then Force Shutdown Now. In the Configuration tab select IRM_server1 and switch to the Keystores tab. By default WebLogic Server uses it's own demo identity and trust. We are now going to switch to the self signed one's we've just created. So select the Change button and switch to Custom Identity and Custom Trust and hit save. Now we have to complete the resulting fields, the setting's i've used in my evaluation server are below. IdentityCustom Identity Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/MyOwnIdentityStore.jks Custom Identity Keystore Type: JKS Custom Identity Keystore Passphrase: password Confirm Custom Identity Keystore Passphrase: password TrustCustom Trust Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/TrustMyOwnSelf.jks Custom Trust Keystore Type: JKS Custom Trust Keystore Passphrase: password Confirm Custom Trust Keystore Passphrase: password Now click on the SSL tab for the IRM_server1 and enter in the alias and passphrase, in my demo here the details are; IdentityPrivate Key Alias: trustself Private Key Passphrase: password Confirm Private Key Passphrase: password And hit save. Now lets test a connection to the IRM server over HTTPS using SSL. Go back to a console window and start the IRM server, a quick reminder on how to do this is... [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/bin [oracle@irm /] ./startManagedWeblogic IRM_server1 Once running, open a browser and head to the SSL port of the server. By default the IRM server will be listening on the URL https://irm.company.intranet:16101/irm_rights. Note in the example image on the right the port is 7002 because it's a system that has the IRM services installed on the Admin server, this isn't typical (or advisable). Your system is going to have a separate managed server which will be listening on port 16101. Once you open this address you will notice that your browser is going to complain that the server certificate is untrusted. The images on the right show how Firefox displays this error. You are going to be prompted every time you create a new SSL session with the server, both from the browser and more annoyingly from the IRM Desktop. If you plan on always using a self signed certificate, it is worth adding it to the Windows certificate store so that when you are accessing sealed content you do not keep being informed this certificate is not trusted. Follow these instructions (which are for Internet Explorer 8, they may vary for your version of IE.) Start Internet Explorer and open the URL to your IRM server over SSL, e.g. https://irm.company.intranet:16101/irm_rights. IE will complain that about the certificate, click on Continue to this website (not recommended). From the IE Tools menu select Internet Options and from the resulting dialog select Security and then click on Trusted Sites and then the Sites button. Add to the list of trusted sites a URL which mates the server you are accessing, e.g. https://irm.company.intranet/ and select OK. Now refresh the page you were accessing and next to the URL you should see a red cross and the words Certificate Error. Click on this button and select View Certificates. You will now see a dialog with the details of the self signed certificate and the Install Certificate... 
button should be enabled. Click on this to start the wizard. Click next and you'll be asked where you should install the certificate. Change the option to Place all certificates in the following store. Select browse and choose the Trusted Root Certification Authorities location and hit OK. You'll then be prompted to install the certificate and answer yes. You also need to import the root signed certificate into the same location, so once again select the red Certificate Error option and this time when viewing the certificate, switch to the Certification Path tab and you should see a CertGenCAB certificate. Select this and then click on View Certificate and go through the same process as above to import the certificate into the store. Finally close all instances of the IE browser and re-access the IRM server URL again, this time you should not receive any errors. Setting up an official SSL certificate in Apache 2.x At this point we now have an IRM server that you can communicate with over SSL. However this certificate isn't trusted by any browser because it's path of trust doesn't end in a recognized certificate authority (CA). Also you are communicating directly to the WebLogic Server over a non standard SSL port, 16101. In a production environment it is common to have another device handle the initial public internet traffic and then proxy this to the WebLogic server. The diagram below shows a very simplified view of this type of deployment. What i'm going to walk through next is configuring Apache to proxy traffic to a WebLogic server and also to use a real SSL certificate from an official CA. First step is to configure Apache to handle incoming requests over SSL. In this guide I am configuring the IRM service in Oracle Enterprise Linux 5 update 3 and Apache 2.2.3 which came with OpenSSL and mod_ssl components. Before I purchase an SSL certificate, I need to generate a certificate request from the server. Oracle.com uses Verisign and for my own personal needs I use cheaper certificates from GoDaddy. The following instructions are specific to Apache, but there are many references out there for other web servers. For Apache I have OpenSSL and the commands are; [oracle@irm /] cd /usr/bin [oracle@irm bin] openssl genrsa -des3 -out irm-apache-server.key 2048 Generating RSA private key, 2048 bit long modulus ............................+++ .........+++ e is 65537 (0x10001) Enter pass phrase for irm-apache-server.key: Verifying - Enter pass phrase for irm-apache-server.key: [oracle@irm bin] openssl req -new -key irm-apache-server.key -out irm-apache-server.csr Enter pass phrase for irm-apache-server.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
----- Country Name (2 letter code) [GB]:US State or Province Name (full name) [Berkshire]:CA Locality Name (eg, city) [Newbury]:San Francisco Organization Name (eg, company) [My Company Ltd]:Oracle Organizational Unit Name (eg, section) []:Security Common Name (eg, your name or your server's hostname) []:irm.company.com Email Address []:[email protected] Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []:testing An optional company name []: You must make sure to remember the pass phrase you used in the initial key generation, you will need this when later configuring Apache. In the /usr/bin directory there are now two new files. The irm-apache-server.csr contains our certificate request and is what you cut and paste, or upload, to your certificate authority when you purchase and validate your SSL certificate. In response you will typically get two files. Your server certificate and another certificate file that will likely contain a set of certificates from your CA which validate your certificate's trust. Next we need to configure Apache to use these files. Typically there is an ssl.conf file which is where all the SSL configuration is done. On my Oracle Enterprise Linux server this file is located in /etc/httpd/conf.d/ssl.conf and i've added the following lines. <VirtualHost irm.company.com> # Setup SSL for irm.company.com ServerName irm.company.com SSLEngine On SSLCertificateFile /oracle/secure/irm.company.com.crt SSLCertificateKeyFile /oracle/secure/irm.company.com.key SSLCertificateChainFile /oracle/secure/gd_bundle.crt </VirtualHost> Restarting Apache (apachectl restart) and I can now attempt to connect to the Apache server in a web browser, https://irm.company.com/. If all is configured correctly I should now see an Apache test page delivered to me over HTTPS. Configuring Apache to proxy traffic to the IRM server Final piece in setting up SSL is to have Apache proxy requests for the IRM server but do so securely. So the requests to Apache will be over HTTPS using a legitimate certificate, but we can also configure Apache to proxy these requests internally across to the IRM server using SSL with the self signed certificate we generated at the start of this article. To do this proxying we use the WebLogic Web Server plugin for Apache which you can download here from Oracle. Download the zip file and extract onto the server. The file extraction reveals a set of zip files, each one specific to a supported web server. In my instance I am using Apache 2.2 32bit on an Oracle Enterprise Linux, 64 bit server. If you are not sure what version your Apache server is, run the command /usr/sbin/httpd -V and you'll see version and it its 32 or 64 bit. Mine is a 32bit server so I need to extract the file WLSPlugin1.1-Apache2.2-linux32-x86.zip. The from the resulting lib folder copy the file mod_wl.so into /usr/lib/httpd/modules/. First we want to test that the plug in will work for regular HTTP traffic. Edit the httpd.conf for Apache and add the following section at the bottom. 
LoadModule weblogic_module modules/mod_wl.so <IfModule mod_weblogic.c>    WebLogicHost irm.company.internal    WebLogicPort 16100    WLLogFile /tmp/wl-proxy.log </IfModule> <Location /irm_rights>    SetHandler weblogic-handler </Location> <Location /irm_desktop>    SetHandler weblogic-handler </Location> <Location /irm_sealing>    SetHandler weblogic-handler </Location> <Location /irm_services>    SetHandler weblogic-handler </Location> Now restart Apache again (apachectl restart) and now open a browser to http://irm.company.com/irm_rights. Apache will proxy the HTTP traffic from the port 80 of your Apache server to the IRM service listening on port 16100 of the WebLogic Managed server. Note above I have included all four of the Locations you might wish to proxy. http://irm.company.internalirm_rights is the URL to the management website, /irm_desktop is the URL used for the IRM Desktop to communicate. irm_sealing is for web services based document sealing and irm_services is for IRM server web services. The last two are typically only used when you have the IRM server integrated with another application and it is unlikely you'd be accessing these resources from the public facing Apache server. However, just in case, i've mentioned them above. Now let's enable SSL communication from Apache to WebLogic. In the ZIP file we extracted were some more modules we need to copy into the Apache folder. Looking back in the lib that we extracted, there are some more files. Copy the following into the /usr/lib/httpd/modules/ folder. libwlssl.so libnnz11.so libclntsh.so.11.1 Now the documentation states that should only need to do this, but I found that I also needed to create an environment variable called LD_LIBRARY_PATH and point this to the folder /usr/lib/httpd/modules/. If I didn't do this, starting Apache with the WebLogic module configured to SSL would throw the error. [crit] (20014)Internal error: WL SSL Init failed for server: (null) on 0 So I had to edit the file /etc/profile and add the following lines at the bottom. You may already have the LD_LIBRARY_PATH variable defined, therefore simply add this path to it. LD_LIBRARY_PATH=/usr/lib/httpd/modules/ export LD_LIBRARY_PATH Now the WebLogic plug in uses an Oracle Wallet to store the required certificates.You'll need to copy the self signed certificate from the IRM server over to the Apache server. Copy over the MyOwnSelfCA.cer.der into the same folder where you are storing your public certificates, in my example this is /oracle/secure. It's worth mentioning these files should ONLY be readable by root (the user Apache runs as). Now lets create an Oracle Wallet and import the self signed certificate from the IRM server. The file orapki was included in the bin folder of the Apache 1.1 plugin zip you extracted. orapki wallet create -wallet /oracle/secure/my-wallet -auto_login_only orapki wallet add -wallet /oracle/secure/my-wallet -trusted_cert -cert MyOwnSelfCA.cer.der -auto_login_only Finally change the httpd.conf to reflect that we want the WebLogic Apache plug-in to use HTTPS/SSL and not just plain HTTP. <IfModule mod_weblogic.c>    WebLogicHost irm.company.internal    WebLogicPort 16101    SecureProxy ON    WLSSLWallet /oracle/secure/my-wallet    WLLogFile /tmp/wl-proxy.log </IfModule> Then restart Apache once more and you can go back to the browser to test the communication. Opening the URL https://irm.company.com/irm_rights will proxy your request to the WebLogic server at https://irm.company.internal:16101/irm_rights. 
At this point you have a fully functional Oracle IRM service, the next step is to create a sealed document and test the entire system.
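    A quick way to sanity-check both legs of this setup from a shell, using the host names from this guide (the CN of the self signed certificate was set to irm.oracle.demo earlier in the article):

        # public leg: Apache should present the CA-issued certificate chain
        openssl s_client -connect irm.company.com:443 -showcerts < /dev/null
        # internal leg: WebLogic should present the self signed certificate
        openssl s_client -connect irm.company.internal:16101 -showcerts < /dev/null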

    Read the article

  • Is there any way to get the cypherpunks Pidgin OTR plugin to be less verbose?

    - by Matt
    Is there anyway to get the cyberphunk piding OTR plugin to be less verbose? Sometimes I get hundreds of lines notifying me of the same thing... (6:40:56 PM) OTR Error: You sent encrypted data to anoynmous, who wasn't expecting it. (6:40:57 PM) Successfully refreshed the unverified conversation with [email protected]/Gaim939DD745. (6:40:57 PM) The last message to [email protected] was resent. (6:40:57 PM) Error setting up private conversation: Malformed message received (6:40:58 PM) OTR Error: You sent encrypted data to anoynmous, who wasn't expecting it. (6:40:58 PM) Successfully refreshed the unverified conversation with [email protected]/Gaim939DD745. (6:40:58 PM) The last message to [email protected] was resent. (6:40:59 PM) Error setting up private conversation: Malformed message received (6:40:59 PM) OTR Error: You sent encrypted data to anoynmous, who wasn't expecting it. (6:41:00 PM) Successfully refreshed the unverified conversation with [email protected]/Gaim939DD745. (6:41:00 PM) The last message to [email protected] was resent. (6:41:00 PM) Error setting up private conversation: Malformed message received (6:41:01 PM) OTR Error: You sent encrypted data to anoynmous, who wasn't expecting it. (6:41:01 PM) Attempting to refresh the private conversation with [email protected]/Gaim939DD745... (6:41:01 PM) Successfully refreshed the unverified conversation with [email protected]/Gaim939DD745. (6:41:01 PM) The last message to [email protected] was resent. (6:41:02 PM) OTR Error: You transmitted an unreadable encrypted message. (6:41:02 PM) Error setting up private conversation: Malformed message received (6:41:02 PM) OTR Error: You sent encrypted data to anoynmous, who wasn't expecting it. (6:41:02 PM) Successfully refreshed the unverified conversation with [email protected]/Gaim939DD745. (6:41:02 PM) The last message to [email protected] was resent. (6:41:03 PM) Error setting up private conversation: Malformed message received (6:41:03 PM) OTR Error: You transmitted an unreadable encrypted message. (6:41:03 PM) Successfully refreshed the unverified conversation with [email protected]/Gaim939DD745. (6:41:03 PM) The last message to [email protected] was resent. (6:41:03 PM) OTR Error: You sent encrypted data to anoynmous, who wasn't expecting it. (6:41:04 PM) Error setting up private conversation: Malformed message received (6:41:09 PM) Attempting to refresh the private conversation with [email protected]/Adium39B22966... (6:41:11 PM) Unverified conversation with [email protected]/Adium39B22966 started. (6:41:11 PM) The last message to [email protected] was resent. (6:41:11 PM) OTR Error: You transmitted an unreadable encrypted message. **(6:41:16 PM) Anonymous: yea** (6:41:20 PM) [email protected]/91A31EC5: yarrr (**6:41:25 PM) Anonymous: sup? (6:41:29 PM) [email protected]/91A31EC5: nothing (6:41:33 PM) [email protected]/91A31EC5: just speaking like a pirate (6:41:51 PM) Anonymous: word (7:13:47 PM) Anonymous: how late you workin? (9:18:12 PM) [email protected]/91A31EC5: not too late** (9:18:13 PM) OTR Error: You transmitted an unreadable encrypted message. (9:18:13 PM) Unverified conversation with [email protected]/Gaim939DD745 started. (9:18:13 PM) The last message to [email protected] was resent. (9:18:14 PM) Error setting up private conversation: Malformed message received (9:18:14 PM) OTR Error: You transmitted an unreadable encrypted message. (9:18:14 PM) Unverified conversation with [email protected]/Adium39B22966 started. 
(9:18:14 PM) The last message to [email protected] was resent. (9:18:15 PM) OTR Error: You transmitted an unreadable encrypted message. (9:18:15 PM) Unverified conversation with [email protected]/Gaim939DD745 started. (9:18:15 PM) The last message to [email protected] was resent. (9:18:15 PM) OTR Error: You transmitted an unreadable encrypted message. (9:18:16 PM) Successfully refreshed the unverified conversation with [email protected]/Gaim939DD745. (9:18:16 PM) The last message to [email protected] was resent. (9:18:16 PM) OTR Error: You transmitted an unreadable encrypted message. (9:18:17 PM) Successfully refreshed the unverified conversation with [email protected]/Gaim939DD745. (9:18:17 PM) The last message to [email protected] was resent. (9:18:17 PM) OTR Error: You transmitted an unreadable encrypted message. (9:18:18 PM) Successfully refreshed the unverified conversation with [email protected]/Gaim939DD745. (9:18:18 PM) The last message to [email protected] was resent. (9:18:18 PM) OTR Error: You transmitted an unreadable encrypted message. (9:18:19 PM) Unverified conversation with [email protected]/Adium39B22966 started. (9:18:19 PM) The last message to [email protected] was resent. (9:18:19 PM) OTR Error: You transmitted an unreadable encrypted message. (9:18:20 PM) Successfully refreshed the unverified conversation with [email protected]/Adium39B22966. (9:18:20 PM) The last message to [email protected] was resent. (9:18:20 PM) OTR Error: You transmitted an unreadable encrypted message. (9:18:20 PM) Attempting to refresh the private conversation with [email protected]/Gaim939DD745... (9:18:21 PM) Unverified conversation with [email protected]/Gaim939DD745 started. (9:18:21 PM) The last message to [email protected] was resent. (9:18:21 PM) OTR Error: You transmitted an unreadable encrypted message. (9:18:21 PM) OTR Error: You transmitted an unreadable encrypted message. (9:18:21 PM) Successfully refreshed the unverified conversation with [email protected]/Gaim939DD745. (9:18:21 PM) The last message to [email protected] was resent. (9:18:21 PM) OTR Error: You transmitted an unreadable encrypted message. (9:18:22 PM) We received an unreadable encrypted message from [email protected]. (9:18:23 PM) The last message to [email protected] was resent. (9:18:24 PM) OTR Error: You transmitted an unreadable encrypted message. (9:18:25 PM) We received an unreadable encrypted message from [email protected]. (9:18:26 PM) Unverified conversation with [email protected]/Adium39B22966 started. (9:18:26 PM) The last message to [email protected] was resent.

    Read the article

  • Property being immediately reset by ApplicationSetting Property Binding

    - by Slider345
    I have a .net 2.0 windows application written in c#, which currently uses several project settings to store user configurations. The forms in the application are made up of lots of user controls, each of which have properties that need to be set to these project settings. Right now these settings are manually assigned to the user control properties. I was hoping to simplify the code by replacing the manual implementation with ApplicationSettings Property Bindings. However, my first property is not behaving properly at all. The setting is an integer, used to record a port number typed into a text box. The setting is bound to an integer property on a user control, and that property sets the Text property on a TextBox control. When I type a new value into the textbox at runtime, as soon as the textbox loses focus, it is immediately replaced by the original value. A breakpoint on the property shows that it is immediately setting the property to the setting from the properties collection after I set it. Can anyone see what I'm doing wrong? Here's some code: The setting: [global::System.Configuration.UserScopedSettingAttribute()] [global::System.Diagnostics.DebuggerNonUserCodeAttribute()] [global::System.Configuration.DefaultSettingValueAttribute("1000")] public int Port { get{ return ((int)(this["Port"])); } set{ this["Port"] = value; } } The binding: this.ctrlNetworkConfig.DataBindings.Add(new System.Windows.Forms.Binding("PortNumber", global::TestProject.Properties.Settings.Default, "Port", true, System.Windows.Forms.DataSourceUpdateMode.OnPropertyChanged)); this.ctrlNetworkConfig.PortNumber = global::TestProject.Properties.Settings.Default.Port; And lastly, the property on the user control: public int PortNumber { get{ int port; if(int.TryParse(this.txtPortNumber.Text, out port)) return port; else return 0; } set{ txtPortNumber.Text = value.ToString(); } } Any thoughts? Thanks in advance for your help. EDIT: Sorry about the formatting, trying to correct.

    Read the article

  • Generating scala AST for recursive method.

    - by scout
    I am generating the scala AST using the following code: val setting = new Settings(error) val reporter = new ConsoleReporter(setting, in, out) { override def displayPrompt = () } val compiler = new Global(setting, reporter) with ASTExtractor{ override def onlyPresentation = true } //setting.PhasesSetting("parser", "parserPhase") val run = new compiler.Run val sourceFiles:List[String] = List("Test.scala") run.compile(sourceFiles.toList) I guess this is the standard code used to run the compiler in the code and generate the AST to work with. The above code worked fine for any valid scala code in Test.scala till now. When I use a recursive function in Test.scala, like def xMethod(x:Int):Int = if(x == 0) -1 else xMethod(x-1) It gives me a java.lang.NullPointerException. The top few lines of the stack trace look like this at scala.tools.nsc.typechecker.Typers$Typer.checkNoDoubleDefsAndAddSynthetics$1(Typers.scala:2170) at scala.tools.nsc.typechecker.Typers$Typer.typedStats(Typers.scala:2196) at scala.tools.nsc.typechecker.Typers$Typer.typedBlock(Typers.scala:1951) at scala.tools.nsc.typechecker.Typers$Typer.typed1(Typers.scala:3815) at scala.tools.nsc.typechecker.Typers$Typer.typed(Typers.scala:4124) at scala.tools.nsc.typechecker.Typers$Typer.typed(Typers.scala:4177) at scala.tools.nsc.transform.TailCalls$TailCallElimination.transform(TailCalls.scala:199) The code works fine for a method like def aMethod(c:Int):Int = { bMethod(c) } def bMethod(x:Int):Int = aMethod(x) Please let me know if recursive functions need any other setting.

    Read the article

  • Binding the textDidChange event on an NSTextField to a MacRuby delegate

    - by kolrie
    I have an NSTextField within a window, and I created a very simple MacRuby delegate: class ServerInputDelegate attr_accessor :parent def textDidChange(notification) NSLog notification.inspect parent.filter end end I have tried setting the control's delegate; I have tried setting the window and every other object I could think of as this delegate. I have also tried setting it to other delegates (the application, for instance), and events like applicationDidFinishLaunching are triggered properly. Is there any trick I am missing in order for this event to be triggered every time the contents of this NSTextField change?
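    A hedged note that may explain the silence: for an NSTextField the per-edit callback is normally the NSControl delegate method controlTextDidChange: (textDidChange: belongs to the NSText/NSTextView delegate), and the delegate outlet of the text field itself - not the window - has to point at the delegate instance. A sketch under that assumption:

        class ServerInputDelegate
          attr_accessor :parent

          # NSTextField, as an NSControl subclass, sends this to its delegate on every edit.
          def controlTextDidChange(notification)
            NSLog notification.inspect
            parent.filter
          end
        end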

    Read the article

  • Optimizing website - minification, sprites, etc...

    - by nivlam
    I'm looking at the product Aptimize Website Accelerator, which is an ISAPI filter that will concatenate files, minify CSS/JavaScript, and more. Does anyone have experience with this product, or any other "all-in-one" solutions? I'm interested in knowing whether something like this would be good long-term, or whether manually setting up all the components (integrating YUI Compressor into the build process, setting up gzip compression, tweaking expiration headers, etc.) would be more beneficial. An all-in-one solution like this looks very tempting, as it could save a lot of time if our website is "less than optimal". But how efficient are these products? Would setting up the components manually produce better results? Or is the gap between the all-in-one solution and manually setting up the components so small that it's negligible?

    Read the article

  • 10 Windows Tweaking Myths Debunked

    - by Chris Hoffman
    Windows is big, complicated, and misunderstood. You’ll still stumble across bad advice from time to time when browsing the web. These Windows tweaking, performance, and system maintenance tips are mostly just useless, but some are actively harmful. Luckily, most of these myths have been stomped out on mainstream sites and forums. However, if you start searching the web, you’ll still find websites that recommend you do these things.

    Erase Cache Files Regularly to Speed Things Up
    You can free up disk space by running an application like CCleaner, another temporary-file-cleaning utility, or even the Windows Disk Cleanup tool. In some cases, you may even see an old computer speed up when you erase a large number of useless files. However, running CCleaner or similar utilities every day to erase your browser’s cache won’t actually speed things up. It will slow down your web browsing, because your browser is forced to redownload everything and rebuild the cache you keep deleting. If you’ve installed CCleaner or a similar program and run it every day with the default settings, you’re actually slowing down your web browsing. Consider at least preventing the program from wiping out your web browser cache.

    Enable ReadyBoost to Speed Up Modern PCs
    Windows still prompts you to enable ReadyBoost when you insert a USB stick or memory card. On modern computers, this is completely pointless — ReadyBoost won’t actually speed up your computer if you have at least 1 GB of RAM. If you have a very old computer with a tiny amount of RAM — think 512 MB — ReadyBoost may help a bit. Otherwise, don’t bother.

    Open the Disk Defragmenter and Manually Defragment
    On Windows 98, users had to open the defragmentation tool themselves and run it, making sure no other applications were using the hard drive while it did its work. Modern versions of Windows can defragment your file system while other programs are using it, and they automatically defragment your disks for you. If you’re still opening the Disk Defragmenter every week and clicking the Defragment button, you don’t need to — Windows is doing it for you unless you’ve told it not to run on a schedule. Modern computers with solid-state drives don’t have to be defragmented at all.

    Disable Your Pagefile to Increase Performance
    When Windows runs out of empty space in RAM, it swaps data out of memory to a pagefile on your hard disk. If a computer doesn’t have much memory and it’s running slowly, it’s probably moving data to the pagefile or reading data from it. Some Windows geeks seem to think that the pagefile is bad for system performance and disable it completely. The argument seems to be that Windows can’t be trusted to manage a pagefile and won’t use it intelligently, so the pagefile needs to be removed. As long as you have enough RAM, it’s true that you can get by without a pagefile. However, if you do have enough RAM, Windows will only use the pagefile rarely anyway. Tests have found that disabling the pagefile offers no performance benefit.

    Enable CPU Cores in MSConfig
    Some websites claim that Windows may not be using all of your CPU cores, or that you can speed up your boot time by increasing the number of cores used during boot. They direct you to the MSConfig application, where you can indeed select an option that appears to increase the number of cores used. In reality, Windows always uses the maximum number of processor cores your CPU has. (Technically, only one core is used at the beginning of the boot process, but the additional cores are quickly activated.) Leave this option unchecked. It’s just a debugging option that lets you set a maximum number of cores, so it would be useful if you wanted to force Windows to use only a single core on a multi-core system — but all it can do is restrict the number of cores used.

    Clean Your Prefetch to Increase Startup Speed
    Windows watches the programs you run and creates .pf files in its Prefetch folder for them. The Prefetch feature works as a sort of cache — when you open an application, Windows checks the Prefetch folder, looks at the application’s .pf file (if it exists), and uses that as a guide to start preloading data the application will use. This helps your applications start faster. Some Windows geeks have misunderstood this feature. They believe that Windows loads these files at boot, so your boot time will slow down as Windows preloads the data specified in the .pf files. They also argue you’ll build up useless files as you uninstall programs and .pf files are left over. In reality, Windows only loads the data in these .pf files when you launch the associated application, and it only stores .pf files for the 128 most recently launched programs. If you were to regularly clean out the Prefetch folder, not only would programs take longer to open because they won’t be preloaded, but Windows would also have to waste time recreating all the .pf files. You could also modify the PrefetchParameters setting to disable Prefetch, but there’s no reason to do that. Let Windows manage Prefetch on its own.

    Disable QoS to Increase Network Bandwidth
    Quality of Service (QoS) is a feature that allows your computer to prioritize its traffic. For example, a time-critical application like Skype could choose to use QoS and prioritize its traffic over a file-downloading program so your voice conversation works smoothly even while you download files. Some people incorrectly believe that QoS always reserves a certain amount of bandwidth and that this bandwidth sits unused until you disable it. This is untrue. In reality, 100% of bandwidth is normally available to all applications unless a program chooses to use QoS. Even if a program does choose to use QoS, the reserved space will be available to other programs unless the program is actively using it. No bandwidth is ever set aside and left empty.

    Set DisablePagingExecutive to Make Windows Faster
    The DisablePagingExecutive registry setting is set to 0 by default, which allows drivers and system code to be paged to disk. When set to 1, drivers and system code are forced to stay resident in memory. Once again, some people believe that Windows isn’t smart enough to manage the pagefile on its own and that changing this option will force Windows to keep important files in memory rather than stupidly paging them out. If you have more than enough memory, changing this won’t really do anything. If you have little memory, changing this setting may force Windows to push programs you’re using to the page file rather than push unused system files there — this would slow things down. This is an option that may be helpful for debugging in some situations, not a setting to change for more performance.

    Process Idle Tasks to Free Memory
    Windows does things, such as creating scheduled system restore points, when you step away from your computer. It waits until your computer is “idle” so it won’t slow your computer down and waste your time while you’re using it. Running the “Rundll32.exe advapi32.dll,ProcessIdleTasks” command forces Windows to perform all of these tasks while you’re using the computer. This is completely pointless and won’t help free memory or anything like that — all you’re doing is forcing Windows to slow your computer down while you’re using it. This command only exists so benchmarking programs can force idle tasks to run before performing benchmarks, ensuring idle tasks don’t start running and interfere with the benchmark.

    Delay or Disable Windows Services
    There’s no real reason to disable Windows services anymore. There was a time when Windows was particularly heavy and computers had little memory — think Windows Vista and those “Vista Capable” PCs Microsoft was sued over. Modern versions of Windows like Windows 7 and 8 are lighter than Windows Vista, and computers have more than enough memory, so you won’t see any improvement from disabling the system services included with Windows. Some people argue for not disabling services, however — they recommend setting services from “Automatic” to “Automatic (Delayed Start)”. By default, the Delayed Start option just starts services two minutes after the last “Automatic” service starts. Setting services to Delayed Start won’t really speed up your boot time, as the services still need to start — in fact, it may lengthen the time it takes to get a usable desktop, because services will still be loading two minutes after booting. Most services can load in parallel, and loading them as early as possible results in a better experience. The “Delayed Start” feature is primarily useful for system administrators who need to ensure a specific service starts later than another one.

    If you ever find a guide that recommends you set a little-known registry setting to improve performance, take a closer look — the change is probably useless. Want to actually speed up your PC? Try disabling the useless startup programs that run at boot, lengthen your boot time, and consume memory in the background. This is a much better tip than any of the above, especially considering most Windows PCs come packed to the brim with bloatware.
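
    That last tip is the one worth acting on. As a rough, read-only illustration (not part of the original article), the C# sketch below lists the per-user startup programs registered under the Run registry key so you can decide which ones to disable; it assumes a standard Windows install and uses only the stock Microsoft.Win32 registry API.

        using System;
        using Microsoft.Win32;

        class ListStartupPrograms
        {
            static void Main()
            {
                // Per-user startup entries live under this key on a standard install.
                const string runKey = @"Software\Microsoft\Windows\CurrentVersion\Run";

                using (RegistryKey key = Registry.CurrentUser.OpenSubKey(runKey))
                {
                    if (key == null)
                        return;
                    foreach (string name in key.GetValueNames())
                        Console.WriteLine("{0}: {1}", name, key.GetValue(name));
                }
            }
        }

    Startup entries can also live under the machine-wide Run key and in the Startup folder, so treat this as a starting point rather than a complete inventory.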

    Read the article

  • How to use applicationSettings in the new web.config configuration in VS2010?

    - by citronas
    I'm used to using web deployment projects. Currently I am developing a new web application with VS2010 and want to try to get along with the new web.config approach and its deployment issues. How can I replace a simple setting like the following?

        <applicationSettings>
          <NAMESPACE>
            <setting name="Testenvironment" serializeAs="String">
              <value>True</value>
            </setting>
          </NAMESPACE>
        </applicationSettings>

    I want this setting to be True in Debug and False in Release. What must the entries in Web.Debug.config and Web.Release.config look like? And by the way: is there any documentation about the new web.config handling? I can't seem to google the correct keywords.
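
    A minimal sketch of what the Web.Release.config entry could look like, assuming the standard XDT transform syntax; NAMESPACE and Testenvironment are just the placeholders carried over from the question:

        <?xml version="1.0"?>
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <applicationSettings>
            <NAMESPACE>
              <setting name="Testenvironment" serializeAs="String"
                       xdt:Transform="Replace" xdt:Locator="Match(name)">
                <value>False</value>
              </setting>
            </NAMESPACE>
          </applicationSettings>
        </configuration>

    Web.Debug.config can either omit the element (so the True value from Web.config is kept) or carry the same transform with a value of True.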

    Read the article

  • Changing a Windows Form's BackColor from a panel

    - by JPJedi
    I have a panel that is docked to the right side of a Windows Form. I set a color with a setting on the panel, but I then want to update the main form's BackColor (not the panel's) to the new setting. However, when I use Me.BackColor = setting, it changes the panel's BackColor instead. This is with VB.NET Windows Forms in Visual Studio 2008. Thanks
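
    If that line runs inside the panel (or a user control docked in the form), Me refers to the panel itself, so the form has to be reached explicitly. A rough VB.NET sketch, assuming the color is stored in a hypothetical My.Settings entry:

        ' Running inside the panel, Me is the panel, so walk up to the containing form.
        Dim owner As Form = Me.FindForm()
        If owner IsNot Nothing Then
            owner.BackColor = My.Settings.MainFormBackColor ' hypothetical setting name
        End If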

    Read the article

  • .NET Settings File Available Types

    - by Mashmagar
    When modifying a .NET settings file, I'm given a choice of types for a setting. However, not all of the types accessible to my project appear, even in the 'Browse' window. What determines whether a type can be used for a settings-file setting? I have a type I created that I would like to be able to save, and I want to know what I need to change about it to use it in a settings file. (VS 2008, .NET 3.5)
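
    One working assumption: a type the settings infrastructure cannot convert to a string is persisted with XmlSerializer, so it needs to be public, have a parameterless constructor, and expose public read/write properties. A hedged C# sketch of such a type (the class name is made up for illustration):

        // Hypothetical custom type intended to be usable as a user setting.
        // Public class, implicit parameterless constructor, and simple public
        // read/write properties so XmlSerializer can round-trip it.
        public class WindowLayout
        {
            public int Width { get; set; }
            public int Height { get; set; }
            public bool Maximized { get; set; }
        }

    If a type still does not show up under Browse, it may be possible to type its fully qualified name into the type box by hand, but treat that as a guess rather than documented behavior.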

    Read the article

  • Settings.bundle Not Appearing on Application Updates

    - by coneybeare
    I have an app that does not currently use a Settings.bundle to display settings in the iPhone Settings app. I am releasing an update that does. On a fresh install, the settings are added to the Settings app as expected, but when updating from the old install, the bundle is not added. Is there some sort of trick to get the Settings app to show my Settings.bundle?

    Read the article

  • How to hide jQuery Sub-Menus (ddsmoothmenu)?

    - by Tim
    I'm new to jQuery and i must admit that i've understood nothing yet, the syntax appears to me as an unknown language although i thought that i had my experiences with javascript. Nevertheless i managed it to implement this menu in my asp.net masterpage's header. Even got it to work that the content-page is loaded with ajax with help from here. But finally i'm failing with the menu to disappear when the new page was loaded asynchronously. I dont know how to hide this accursed jQuery Menu. Following the part of the js-file where the events are registered for hiding/disappearing. I dont know how to get the part that is responsible for it and even i dont know how to implement that part in my Anchor-onclick function where i dont have a reference to the jQuery Object. buildmenu:function($, setting){ var smoothmenu=ddsmoothmenu var $mainmenu=$("#"+setting.mainmenuid+">ul") //reference main menu UL $mainmenu.parent().get(0).className=setting.classname || "ddsmoothmenu" var $headers=$mainmenu.find("ul").parent() $headers.hover( function(e){ $(this).children('a:eq(0)').addClass('selected') }, function(e){ $(this).children('a:eq(0)').removeClass('selected') } ) $headers.each(function(i){ //loop through each LI header var $curobj=$(this).css({zIndex: 100-i}) //reference current LI header var $subul=$(this).find('ul:eq(0)').css({display:'block'}) $subul.data('timers', {}) this._dimensions={w:this.offsetWidth, h:this.offsetHeight, subulw:$subul.outerWidth(), subulh:$subul.outerHeight()} this.istopheader=$curobj.parents("ul").length==1? true : false //is top level header? $subul.css({top:this.istopheader && setting.orientation!='v'? this._dimensions.h+"px" : 0}) $curobj.children("a:eq(0)").css(this.istopheader? {paddingRight: smoothmenu.arrowimages.down[2]} : {}).append( //add arrow images '<img src="'+ (this.istopheader && setting.orientation!='v'? smoothmenu.arrowimages.down[1] : smoothmenu.arrowimages.right[1]) +'" class="' + (this.istopheader && setting.orientation!='v'? smoothmenu.arrowimages.down[0] : smoothmenu.arrowimages.right[0]) + '" style="border:0;" />' ) if (smoothmenu.shadow.enable){ this._shadowoffset={x:(this.istopheader?$subul.offset().left+smoothmenu.shadow.offsetx : this._dimensions.w), y:(this.istopheader? $subul.offset().top+smoothmenu.shadow.offsety : $curobj.position().top)} //store this shadow's offsets if (this.istopheader) $parentshadow=$(document.body) else{ var $parentLi=$curobj.parents("li:eq(0)") $parentshadow=$parentLi.get(0).$shadow } this.$shadow=$('<div class="ddshadow'+(this.istopheader? ' toplevelshadow' : '')+'"></div>').prependTo($parentshadow).css({left:this._shadowoffset.x+'px', top:this._shadowoffset.y+'px'}) //insert shadow DIV and set it to parent node for the next shadow div } $curobj.hover( function(e){ var $targetul=$subul //reference UL to reveal var header=$curobj.get(0) //reference header LI as DOM object clearTimeout($targetul.data('timers').hidetimer) $targetul.data('timers').showtimer=setTimeout(function(){ header._offsets={left:$curobj.offset().left, top:$curobj.offset().top} var menuleft=header.istopheader && setting.orientation!='v'? 0 : header._dimensions.w menuleft=(header._offsets.left+menuleft+header._dimensions.subulw>$(window).width())? (header.istopheader && setting.orientation!='v'? 
-header._dimensions.subulw+header._dimensions.w : -header._dimensions.w) : menuleft //calculate this sub menu's offsets from its parent if ($targetul.queue().length<=1){ //if 1 or less queued animations $targetul.css({left:menuleft+"px", width:header._dimensions.subulw+'px'}).animate({height:'show',opacity:'show'}, ddsmoothmenu.transition.overtime) if (smoothmenu.shadow.enable){ var shadowleft=header.istopheader? $targetul.offset().left+ddsmoothmenu.shadow.offsetx : menuleft var shadowtop=header.istopheader?$targetul.offset().top+smoothmenu.shadow.offsety : header._shadowoffset.y if (!header.istopheader && ddsmoothmenu.detectwebkit){ //in WebKit browsers, restore shadow's opacity to full header.$shadow.css({opacity:1}) } header.$shadow.css({overflow:'', width:header._dimensions.subulw+'px', left:shadowleft+'px', top:shadowtop+'px'}).animate({height:header._dimensions.subulh+'px'}, ddsmoothmenu.transition.overtime) } } }, ddsmoothmenu.showhidedelay.showdelay) }, function(e){ var $targetul=$subul var header=$curobj.get(0) clearTimeout($targetul.data('timers').showtimer) $targetul.data('timers').hidetimer=setTimeout(function(){ $targetul.animate({height:'hide', opacity:'hide'}, ddsmoothmenu.transition.outtime) if (smoothmenu.shadow.enable){ if (ddsmoothmenu.detectwebkit){ //in WebKit browsers, set first child shadow's opacity to 0, as "overflow:hidden" doesn't work in them header.$shadow.children('div:eq(0)').css({opacity:0}) } header.$shadow.css({overflow:'hidden'}).animate({height:0}, ddsmoothmenu.transition.outtime) } }, ddsmoothmenu.showhidedelay.hidedelay) } ) //end hover }) //end $headers.each() $mainmenu.find("ul").css({display:'none', visibility:'visible'}) } one link of my menu what i want to hide when the content is redirected to another page(i need "closeMenu-function"): <li><a href="DeliveryControl.aspx" onclick="AjaxContent.getContent(this.href);closeMenu();return false;">Delivery Control</a></li> In short: I want to fade out the submenus the same way they do automatically onblur, so that only the headermenu stays visible but i dont know how. Thanks, Tim EDIT: thanks to Starx' private-lesson in jQuery for beginners i solved it: I forgot the # in $("#smoothmenu1"). After that it was not difficult to find and call the hover-function from the menu's headers to let them fade out smoothly: $("#smoothmenu1").find("ul").hover(); Regards, Tim
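
    For reference, one way to wrap that fix in the closeMenu() helper the question's onclick handlers expect is to fire the menu's own hover-out handlers; a sketch, assuming the container id smoothmenu1 from the question:

        function closeMenu() {
            // Trigger the mouseleave handlers ddsmoothmenu bound on each header LI,
            // so any open sub-menus fade out exactly as they do on a normal hover-out.
            jQuery("#smoothmenu1").find("li").trigger("mouseleave");
        }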

    Read the article

< Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >