Search Results

Search found 22546 results on 902 pages for 'mode management'.


  • Color highlights have vanished in emacs

    - by Jim Kiley
    I'm using emacs on a remote Linux server that I access via ssh. I'm editing C files that have a non-standard suffix, so I have had to manually enter c-mode with M-x c-mode every time I open one of those files. I found this annoying, so I started monkeying with my .emacs to make that problem go away. Instead, all the color highlights in c-mode went away. Correction: all my color highlights are gone. I've removed the .emacs file, logged out and logged back in, but the color highlights are still gone. I miss them! They were very helpful. How do I get them back?

    Read the article

  • Unloading uwsgi dynamic app

    - by Gevious
    I have a uWSGI setup where I run in dynamic mode and add applications to it. All applications work off the same codebase, but each has its own settings file. It's working beautifully. Say, for instance, I want to change a setting for one app that has already been loaded. Is there a way I can get uWSGI to reload that app, rather than restarting the whole uWSGI server? In emperor mode I could just touch the config file. How do I achieve the equivalent result in dynamic mode?

    Read the article

  • ESXI with non standard hardware HDD issues

    - by Hurricanepkt
    I have 3 very underutilized servers that I am condensing onto one of those Shuttle PCs with VMware ESXi. The HDD seems to be the bottleneck right now (the activity light is almost always solid). Right now I have a single 1TB Seagate 7200.11 connected by SATA. VMware ESXi cannot detect it when running in AHCI mode, but does when running in IDE mode. I have read that IDE mode can give a 5% performance hit, which might give me enough breathing room. However, I am open to setting up an external eSATA drive or some sort of RAID to give me more than just the 5%. I am just wary of sinking money into hardware without knowing whether it will work. Does anyone know of resources or procedures for how to get this working?

    Read the article

  • IIS FTP server not working after purchase of SSL certificate

    - by Chris
    I've been connecting to my web server with active mode in FileZilla with no problems. Over the weekend, an SSL certificate was dropped into a folder that I access with FTP, and which contains files for the website. Now I am receiving a 425 error in active mode on the FTP root, so I can't really do anything but log in. In passive mode, I can connect and move around in the directory tree, but the connection seems shaky. Occasionally I'll time out, and I can't get access at all to the folder containing the SSL certificate. My question is how does the SSL certificate affect my FTP connection (if at all)? Does its presence demand the use of FTP over SSL? Note: As far as I know, the only change which occurred was the placement of the SSL certificate. Firewall settings, FTP client and server settings should all be the same as before, when everything was working.

    Read the article

  • Time Capsule + Ubee Router?

    - by Charlie
    I can't for the life of me figure this out. I recently had TWC installed in my house, and wanted to disable the NAT and router functions of it. I have a Time Capsule hooked up to it from LAN1 (on the Ubee) to the WAN port on the TC. The problems started occurring here. I figured the settings would be these: Ubee Configuration mode: Bridge DHCP: Off TC IPv4: 192.168.100.2 Subnet Mask: 255.255.255.0 Router Address: 192.168.100.1 DNS Servers: 8.8.8.8, 8.8.4.4 Router Mode: DHCP and NAT But using those settings, my TC says "Double NAT", so I have to change it all around to the default settings of the Ubee using NAT. This leads me to believe bridge mode doesn't actually turn off NAT...

    Read the article

  • Question about getting information off an old HD...

    - by user37983
    OK, so my old computer was a Sony Vaio. It's kind of old and had XP on it. My girlfriend thrashed it - the CD drive went, and she tried fixing it by messing with the configuration, BIOS, etc. The actual laptop is really fried - the keyboard doesn't work, etc. However, the hard drive is still intact. I tried putting the HD in my new computer (a Toshiba running Win7) and it looks like it's going to boot up: it goes to the screen with the logo and the status bar starts to load, then it flashes a blue screen for a split second and goes to the black screen where it says Windows did not shut down properly and gives the options (Start Windows Normally, Safe Mode, Safe Mode with Networking, Safe Mode with Command Prompt). I've tried every option but it always goes back to this screen. I need to get into the HD because I have very important files. Is there any way?

    Read the article

  • How can I read the verbose output from a Cmdlet in C# using Exchange Powershell

    - by mrkeith
    Environment: Exchange 2007 sp3 (2003 sp2 mixed mode) Visual Studio 2008, .Net 3.5 Hello, I'm working with an Exchange powershell move-mailbox cmdlet and have noted when I do so from the Exchange Management shell (using the Verbose switch) there is a ton of real-time information provided. To provide a little context, I'm attempting to create a UI application that moves mailboxes similarly to the Exchange Management Console but desire to support an input file and specific server/database destinations for each entry (and threading). Here's roughly what I have at present but I'm not sure if there is an event I need to register for or what... And to be clear, I desire to get this information in real-time so I may update my UI to reflect what's occurring in the move sequence for the appropriate user (pretty much like the native functionality offered in the Management Console). And in case you are wondering, the reason why I'm not content with the Management Console functionality is, I have an algorithm which I'm using to balance users depending on storage limit, Blackberry use, journaling, exception mailbox size etc which demands user be mapped to specific locations... and I do not desire to create many/several move groups for each common destination or to hunt for lists of users individually through the management console UI. I can not seem to find any good documentation or examples of how to tie into reading the verbose messages that are provided within the console using C# (I see value in being able to read this kind of information in many different scenarios). I've explored the Invoke and InvokeAsync methods and the StateChanged & DataReady events but none of these seem to provide the information (verbose comments) that I'm after. Any direction or examples that can be provided will be very appreciated! 
    A code sample, which is little more than how I would ordinarily call any other PowerShell command, follows:

        // config to use ExMgmt shell, create runspace and open it
        RunspaceConfiguration rsConfig = RunspaceConfiguration.Create();
        PSSnapInException snapInException = null;
        PSSnapInInfo info = rsConfig.AddPSSnapIn("Microsoft.Exchange.Management.PowerShell.Admin", out snapInException);
        if (snapInException != null) throw snapInException;
        Runspace runspace = RunspaceFactory.CreateRunspace(rsConfig);
        try
        {
            runspace.Open();

            // create a pipeline and feed script text
            Pipeline pipeline = runspace.CreatePipeline();
            string targetDatabase = @"myServer\myStorageGroup\myDB";
            string mbxOwner = "[email protected]";
            Command myMoveMailbox = new Command("Move-Mailbox", false, false);
            myMoveMailbox.Parameters.Add("Identity", mbxOwner);
            myMoveMailbox.Parameters.Add("TargetDatabase", targetDatabase);
            myMoveMailbox.Parameters.Add("Verbose");
            myMoveMailbox.Parameters.Add("ValidateOnly");
            myMoveMailbox.Parameters.Add("Confirm", false);
            pipeline.Commands.Add(myMoveMailbox);

            System.Collections.ObjectModel.Collection<PSObject> output = null;

            // these next few lines that are commented out are where I've tried
            // registering for events and calling asynchronously but this doesn't
            // seem to get me anywhere closer
            //
            //pipeline.StateChanged += new EventHandler(pipeline_StateChanged);
            //pipeline.Output.DataReady += new EventHandler(Output_DataReady);
            //pipeline.InvokeAsync();
            //pipeline.Input.Close();
            //return;
            // tried these variations that are commented out but none seem to be useful

            output = pipeline.Invoke();

            // Check for errors in the pipeline and throw an exception if necessary
            if (pipeline.Error != null && pipeline.Error.Count > 0)
            {
                StringBuilder pipelineError = new StringBuilder();
                pipelineError.AppendFormat("Error calling Test() Cmdlet. ");
                foreach (object item in pipeline.Error.ReadToEnd())
                    pipelineError.AppendFormat("{0}\n", item.ToString());
                throw new Exception(pipelineError.ToString());
            }

            foreach (PSObject psObject in output)
            {
                // blah, blah, blah
                // this is normally where I would read details about a particular PS command
                // but really pertains to a command once it finishes and has nothing to do with
                // the verbose messages that I'm after... since this part of the method pertains
                // to the after-effects of a command having run, I'm suspecting I need to look to
                // the async invoke method but am not certain or knowing how.
            }
        }
        finally
        {
            runspace.Close();
        }

    Thanks! Keith
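
    For what it's worth, here is a minimal sketch of one way the verbose stream can be observed in real time. It assumes the PowerShell 2.0 hosting API (the System.Management.Automation.PowerShell class) is available to the application, which may not hold on a box limited to the Exchange 2007-era PowerShell 1.0 runtime; the mailbox and database values are placeholders taken from the sample above. Verbose records raised by a cmdlet land in the Streams.Verbose collection, and its DataAdded event fires as each record arrives, which is the kind of real-time hook being asked about here.

        using System;
        using System.Management.Automation;
        using System.Management.Automation.Runspaces;

        static class VerboseMoveMailboxSketch
        {
            // Sketch only: capture -Verbose output from Move-Mailbox as it is produced.
            static void Run()
            {
                RunspaceConfiguration rsConfig = RunspaceConfiguration.Create();
                PSSnapInException snapInException = null;
                rsConfig.AddPSSnapIn("Microsoft.Exchange.Management.PowerShell.Admin", out snapInException);
                if (snapInException != null) throw snapInException;

                using (Runspace runspace = RunspaceFactory.CreateRunspace(rsConfig))
                using (PowerShell ps = PowerShell.Create())
                {
                    runspace.Open();
                    ps.Runspace = runspace;

                    ps.AddCommand("Move-Mailbox")
                      .AddParameter("Identity", "someone@mydomain.com")               // placeholder
                      .AddParameter("TargetDatabase", @"myServer\myStorageGroup\myDB") // placeholder
                      .AddParameter("ValidateOnly")
                      .AddParameter("Verbose");

                    // DataAdded fires for every verbose record as the cmdlet emits it.
                    ps.Streams.Verbose.DataAdded += (sender, e) =>
                    {
                        VerboseRecord record = ps.Streams.Verbose[e.Index];
                        Console.WriteLine("VERBOSE: " + record.Message); // update the UI here instead
                    };

                    foreach (PSObject result in ps.Invoke())
                    {
                        // handle the completed move result, as in the original sample
                    }
                }
            }
        }

    The Invoke call itself still blocks, so for live UI updates this loop would typically run on a background thread that marshals the verbose text back to the UI.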

    Read the article

  • SQL server deadlock between INSERT and SELECT statement

    - by dtroy
    Hi! I've got a problem with multiple deadlocks on SQL server 2005. This one is between an INSERT and a SELECT statement. There are two tables. Table 1 and Table2. Table2 has Table1's PK (table1_id) as foreign key. Index on table1_id is clustered. The INSERT inserts a single row into table2 at a time. The SELCET joins the 2 tables. (it's a long query which might take up to 12 secs to run) According to my understanding (and experiments) the INSERT should acquire an IS lock on table1 to check referential integrity (which should not cause a deadlock). But, in this case it acquired an IX page lock The deadlock report: <deadlock-list> <deadlock victim="process968898"> <process-list> <process id="process8db1f8" taskpriority="0" logused="2424" waitresource="OBJECT: 5:789577851:0 " waittime="12390" ownerId="61831512" transactionname="user_transaction" lasttranstarted="2010-04-16T07:10:13.347" XDES="0x222a8250" lockMode="IX" schedulerid="1" kpid="3764" status="suspended" spid="52" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-04-16T07:10:13.350" lastbatchcompleted="2010-04-16T07:10:13.347" clientapp=".Net SqlClient Data Provider" hostname="VIDEV01-B-ME" hostpid="3040" loginname="DatabaseName" isolationlevel="read uncommitted (1)" xactid="61831512" currentdb="5" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056"> <executionStack> <frame procname="DatabaseName.dbo.prcTable2_Insert" line="18" stmtstart="576" stmtend="1148" sqlhandle="0x0300050079e62d06e9307f000b9d00000100000000000000"> INSERT INTO dbo.Table2 ( f1, table1_id, f2 ) VALUES ( @p1, @p_DocumentVersionID, @p1 ) </frame> </executionStack> <inputbuf> Proc [Database Id = 5 Object Id = 103671417] </inputbuf> </process> <process id="process968898" taskpriority="0" logused="0" waitresource="PAGE: 5:1:46510" waittime="7625" ownerId="61831406" transactionname="INSERT" lasttranstarted="2010-04-16T07:10:12.717" XDES="0x418ec00" lockMode="S" schedulerid="2" kpid="1724" status="suspended" spid="53" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-04-16T07:10:12.713" lastbatchcompleted="2010-04-16T07:10:12.713" clientapp=".Net SqlClient Data Provider" hostname="VIDEV01-B-ME" hostpid="3040" loginname="DatabaseName" isolationlevel="read committed (2)" xactid="61831406" currentdb="5" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056"> <executionStack> <frame procname="DatabaseName.dbo.prcGetList" line="64" stmtstart="3548" stmtend="11570" sqlhandle="0x03000500dbcec17e8d267f000b9d00000100000000000000"> <!-- XXXXXXXXXXXXXX...SELECT STATEMENT WITH Multiple joins including both Table2 table 1 and .... 
XXXXXXXXXXXXXXX --> </frame> </executionStack> <inputbuf> Proc [Database Id = 5 Object Id = 2126630619] </inputbuf> </process> </process-list> <resource-list> <pagelock fileid="1" pageid="46510" dbid="5" objectname="DatabaseName.dbo.table1" id="lock6236bc0" mode="IX" associatedObjectId="72057594042908672"> <owner-list> <owner id="process8db1f8" mode="IX"/> </owner-list> <waiter-list> <waiter id="process968898" mode="S" requestType="wait"/> </waiter-list> </pagelock> <objectlock lockPartition="0" objid="789577851" subresource="FULL" dbid="5" objectname="DatabaseName.dbo.Table2" id="lock970a240" mode="S" associatedObjectId="789577851"> <owner-list> <owner id="process968898" mode="S"/> </owner-list> <waiter-list> <waiter id="process8db1f8" mode="IX" requestType="wait"/> </waiter-list> </objectlock> </resource-list> </deadlock> </deadlock-list> Can anyone explain why the INSERT gets the IX page lock ? Am I not reading the deadlock report properly? BTW, I have not managed to reproduce this issue. Thanks!

    Read the article

  • Building a JMX client in a servlet installed on the Deployment Manager

    - by Trevor
    Hi guys, I'm building a monitoring application as a servlet running on my websphere 7 ND deployment manager. The tool uses JMX to query the deployment manager for various data. Global Security is enabled on the dmgr. I'm having problems getting this to work however. My first attempt was to use the websphere client code: String sslProps = "file:" + base +"/properties/ssl.client.props"; System.setProperty("com.ibm.SSL.ConfigURL", sslProps); String soapProps = "file:" + base +"/properties/soap.client.props"; System.setProperty("com.ibm.SOAP.ConfigURL", pp); Properties connectProps = new Properties(); connectProps.setProperty(AdminClient.CONNECTOR_TYPE, AdminClient.CONNECTOR_TYPE_SOAP); connectProps.setProperty(AdminClient.CONNECTOR_HOST, dmgrHost); connectProps.setProperty(AdminClient.CONNECTOR_PORT, soapPort); connectProps.setProperty(AdminClient.CONNECTOR_SECURITY_ENABLED, "true"); AdminClient adminClient = AdminClientFactory.createAdminClient(connectProps) ; This results in the following exception: Caused by: com.ibm.websphere.management.exception.ConnectorNotAvailableException: ADMC0016E: The system cannot create a SOAP connector to connect to host ssunlab10.apaceng.net at port 13903. at com.ibm.ws.management.connector.soap.SOAPConnectorClient.getUrl(SOAPConnectorClient.java:1306) at com.ibm.ws.management.connector.soap.SOAPConnectorClient.access$300(SOAPConnectorClient.java:128) at com.ibm.ws.management.connector.soap.SOAPConnectorClient$4.run(SOAPConnectorClient.java:370) at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:118) at com.ibm.ws.management.connector.soap.SOAPConnectorClient.reconnect(SOAPConnectorClient.java:363) ... 22 more Caused by: java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:519) at java.net.Socket.connect(Socket.java:469) at java.net.Socket.<init>(Socket.java:366) at java.net.Socket.<init>(Socket.java:209) at com.ibm.ws.management.connector.soap.SOAPConnectorClient.getUrl(SOAPConnectorClient.java:1286) ... 26 more So, I then tried to do it via RMI, but adding in the sas.client.properties to the environment, and setting the connectort type in the code to CONNECTOR_TYPE_RMI. Now though I got a NameNotFoundException out of CORBA: Caused by: javax.naming.NameNotFoundException: Context: , name: JMXConnector: First component in name JMXConnector not found. [Root exception is org.omg.CosNaming.NamingContextPackage.NotFound: IDL:omg.org/CosNaming/NamingContext/NotFound:1.0] To see if it was an IBM issue, I tried using the standard JMX connector as well with the same result (substitute AdminClient for JMXConnector in the above error) * JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/JMXConnector"); Hashtable h = new Hashtable(); String providerUrl = "corbaloc:iiop:" + dmgrHost + ":" + rmiPort + "/WsnAdminNameService"; h.put(Context.PROVIDER_URL, providerUrl); // Specify the user ID and password for the server if security is enabled on server. String[] credentials = new String[] { "***", "***" }; h.put("jmx.remote.credentials", credentials); // Establish the JMX connection. JMXConnector jmxc = JMXConnectorFactory.connect(url, h); // Get the MBean server connection instance. 
mbsc = jmxc.getMBeanServerConnection(); At this point, in desperation I wrote a wsadmin sccript to run both the RMI and SOAP methods. To my amazement, this works fine. So my question is, why does the code not work in a servlet installed on the dmgr ? regards, Trevor

    Read the article

  • Delete Stored Proc Deadlock in Sql Server

    - by Chris
    I am having the following deadlock in SQL Server 2005 with a specific delete stored proc and I can't figure out what I need to do to remedy it. <deadlock-list> <deadlock victim="processf3a868"> <process-list> <process id="processcae718" taskpriority="0" logused="0" waitresource="KEY: 7:72057594340311040 (b50041b389fe)" waittime="62" ownerId="1678098" transactionguid="0x950057256032d14db6a2c553a39a8279" transactionname="user_transaction" lasttranstarted="2010-05-26T13:45:23.517" XDES="0x8306c370" lockMode="RangeS-U" schedulerid="1" kpid="2432" status="suspended" spid="59" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-05-26T13:45:23.717" lastbatchcompleted="2010-05-26T13:45:23.717" clientapp=".Net SqlClient Data Provider" hostname="DEVELOPER01" hostpid="28104" loginname="DEVELOPER01\ServiceUser" isolationlevel="serializable (4)" xactid="1678098" currentdb="7" lockTimeout="4294967295" clientoption1="673185824" clientoption2="128056"> <executionStack> <frame procname="DB.dbo.sp_DeleteSecuritiesRecords" line="13" stmtstart="708" stmtend="918" sqlhandle="0x030007008b6b662229b10c014f9d00000100000000000000"> DELETE FROM tSecuritiesRecords WHERE [FilingID] = @filingID AND [AccountID] = @accountID </frame> </executionStack> <inputbuf> Proc [Database Id = 7 Object Id = 577137547] </inputbuf> </process> <process id="processf3a868" taskpriority="0" logused="0" waitresource="KEY: 7:72057594340311040 (4f00409af90f)" waittime="93" ownerId="1678019" transactionguid="0xb716547a8f7fdd40b342e5db6b3699fb" transactionname="user_transaction" lasttranstarted="2010-05-26T13:45:21.543" XDES="0x92617130" lockMode="X" schedulerid="3" kpid="13108" status="suspended" spid="57" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-05-26T13:45:23.717" lastbatchcompleted="2010-05-26T13:45:23.717" clientapp=".Net SqlClient Data Provider" hostname="DEVELOPER01" hostpid="28104" loginname="DEVELOPER01\ServiceUser" isolationlevel="serializable (4)" xactid="1678019" currentdb="7" lockTimeout="4294967295" clientoption1="673185824" clientoption2="128056"> <executionStack> <frame procname="DB.dbo.sp_DeleteSecuritiesRecords" line="13" stmtstart="708" stmtend="918" sqlhandle="0x030007008b6b662229b10c014f9d00000100000000000000"> DELETE FROM tSecuritiesRecords WHERE [FilingID] = @filingID AND [AccountID] = @accountID </frame> </executionStack> <inputbuf> Proc [Database Id = 7 Object Id = 577137547] </inputbuf> </process> </process-list> <resource-list> <keylock hobtid="72057594340311040" dbid="7" objectname="DB.dbo.tSecuritiesRecords" indexname="PK_tTransactions" id="lock82416380" mode="RangeS-U" associatedObjectId="72057594340311040"> <owner-list> <owner id="processf3a868" mode="RangeS-U"/> </owner-list> <waiter-list> <waiter id="processcae718" mode="RangeS-U" requestType="convert"/> </waiter-list> </keylock> <keylock hobtid="72057594340311040" dbid="7" objectname="DB.dbo.tSecuritiesRecords" indexname="PK_tTransactions" id="lock825fd380" mode="RangeS-U" associatedObjectId="72057594340311040"> <owner-list> <owner id="processcae718" mode="RangeS-S"/> </owner-list> <waiter-list> <waiter id="processf3a868" mode="X" requestType="convert"/> </waiter-list> </keylock> </resource-list> </deadlock> </deadlock-list>
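
    Not a fix for the root cause, but for context: the graph shows both sessions running at the serializable isolation level and colliding while converting key-range locks on PK_tTransactions. A common client-side mitigation while the root cause is investigated is to catch SQL Server error 1205 (the deadlock-victim error) and retry the call. The sketch below assumes a .NET SqlClient caller, integer key parameters, and the procedure and parameter names that appear in the deadlock graph; it is illustrative only.

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Threading;

        static class DeadlockRetrySketch
        {
            const int DeadlockErrorNumber = 1205; // "chosen as deadlock victim"

            static void DeleteSecuritiesRecords(string connectionString, int filingId, int accountId)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        using (SqlConnection connection = new SqlConnection(connectionString))
                        using (SqlCommand command = new SqlCommand("dbo.sp_DeleteSecuritiesRecords", connection))
                        {
                            command.CommandType = CommandType.StoredProcedure;
                            command.Parameters.AddWithValue("@filingID", filingId);
                            command.Parameters.AddWithValue("@accountID", accountId);

                            connection.Open();
                            command.ExecuteNonQuery();
                        }
                        return; // success
                    }
                    catch (SqlException ex)
                    {
                        // Rethrow anything that is not a deadlock, and give up after a few attempts.
                        if (ex.Number != DeadlockErrorNumber || attempt >= 3)
                            throw;

                        // The victim's transaction has already been rolled back; back off briefly and retry.
                        Thread.Sleep(100 * attempt);
                    }
                }
            }
        }

    Whether the underlying contention is better addressed by indexing, by relaxing the serializable isolation requirement, or by snapshot isolation is a separate decision from this retry wrapper.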

    Read the article

  • Getting SQL Syntax Error in INSERT INTO statement in Access 2010

    - by hello123
    I have written the following Insert Into statement in Access 2010 VBA: Private Sub AddBPSSButton_Click() ' CurrentDb.Execute "INSERT INTO TabClearDetail(C_Site) VALUES(" & Me.C_Site & ")" Dim strSQL As String 'MsgBox Me.[Clearance Applying For] 'MsgBox Me.[Contract Applying for] 'MsgBox Me.[C_Site] 'MsgBox Me.[C_SponsorSurname] 'MsgBox Me.[C_SponsorForename] 'MsgBox Me.[C_SponsorContactDetails] 'MsgBox Me.[C_EmploymentDetail] 'MsgBox Me.[C_SGNumber] 'MsgBox Me.[C_REF1DateRecd] 'MsgBox Me.[C_REF2DateRecd] 'MsgBox Me.[C_IDDateRecd] 'MsgBox Me.[C_IDNum] 'MsgBox Me.[C_CriminalDeclarationDate] 'MsgBox Me.[Credit Check Consent] 'MsgBox Me.[C_CreditCheckDate] 'MsgBox Me.[Referred for Management Decision] 'MsgBox Me.[Management Decision Date] 'MsgBox Me.[C_Comment] 'MsgBox Me.[C_DateCleared] 'MsgBox Me.[C_ClearanceLevel] 'MsgBox Me.[C_ContractAssigned] 'MsgBox Me.[C_ExpiryDate] 'MsgBox Me.[C_LinKRef] 'MsgBox Me.[C_OfficialSecretsDate] strSQL = "INSERT INTO TabClearDetail(Clearance Applying For, Contract Applying for, " & _ "C_Site, C_SponsorSurname, C_SponsorForename, C_SponsorContactDetails, C_EmploymentDetail, " & _ "C_SGNumber, C_REF1DateRecd, C_RED2DateRecd, C_IDDateRecd, C_IDNum, " & _ "C_CriminalDeclarationDate, Credit Check Consent, C_CreditCheckDate, Referred for Management Decision, " & _ "Management Decision Date, C_Comment, C_DateCleared, C_ClearanceLevel, C_ContractAssigned, " & _ "C_ExpiryDate, C_LinkRef, C_OfficialSecretsDate) VALUES('" & Me.[Clearance Applying For] & "', " & _ "'" & Me.[Contract Applying for] & "', '" & Me.[C_Site] & "', '" & Me.[C_SponsorSurname] & "', " & _ "'" & Me.[C_SponsorForename] & "', '" & Me.[C_SponsorContactDetails] & "', " & _ "'" & Me.[C_EmploymentDetail] & "', '" & Me.[C_SGNumber] & "', '" & Me.[C_REF1DateRecd] & "', " & _ "'" & Me.[C_REF2DateRecd] & "', '" & Me.[C_IDDateRecd] & "', '" & Me.[C_IDNum] & "', " & _ "'" & Me.[C_CriminalDeclarationDate] & "', '" & Me.[Credit Check Consent] & "', '" & Me.[C_CreditCheckDate] & "', " & _ "'" & Me.[Referred for Management Decision] & "', '" & Me.[Management Decision Date] & "', " & _ "'" & Me.[C_Comment] & "', '" & Me.[C_DateCleared] & "', '" & Me.[C_ClearanceLevel] & "', " & _ "'" & Me.[C_ContractAssigned] & "', '" & Me.[C_ExpiryDate] & "', '" & Me.[C_LinKRef] & "', " & _ "'" & Me.[C_OfficialSecretsDate] & "');" DoCmd.RunSQL (strSQL) 'MsgBox strSQL End Sub All The MsgBox calls work, so I believe I have typed all column names and text box names correctly. I am getting a Syntax error when I get to the DoCmd.RunSQL line. Have been staring at this for quite a while trying to see if I have missed a comma or speech mark or something, but am hoping maybe another set of eyes will see my mistake. Any help will be greatly appreciated. Thanks!

    Read the article

  • July 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I’m super excited to announce the July 2013 release of the Ajax Control Toolkit. You can download the new version of the Ajax Control Toolkit from CodePlex (http://ajaxControlToolkit.CodePlex.com) or install the Ajax Control Toolkit from NuGet: With this release, we have completely rewritten the way the Ajax Control Toolkit combines, minifies, gzips, and caches JavaScript files. The goal of this release was to improve the performance of the Ajax Control Toolkit and make it easier to create custom Ajax Control Toolkit controls. Improving Ajax Control Toolkit Performance Previous releases of the Ajax Control Toolkit optimized performance for a single page but not multiple pages. When you visited each page in an app, the Ajax Control Toolkit would combine all of the JavaScript files required by the controls in the page into a new JavaScript file. So, even if every page in your app used the exact same controls, visitors would need to download a new combined Ajax Control Toolkit JavaScript file for each page visited. Downloading new scripts for each page that you visit does not lead to good performance. In general, you want to make as few requests for JavaScript files as possible and take maximum advantage of caching. For most apps, you would get much better performance if you could specify all of the Ajax Control Toolkit controls that you need for your entire app and create a single JavaScript file which could be used across your entire app. What a great idea! Introducing Control Bundles With this release of the Ajax Control Toolkit, we introduce the concept of Control Bundles. You define a Control Bundle to indicate the set of Ajax Control Toolkit controls that you want to use in your app. You define Control Bundles in a file located in the root of your application named AjaxControlToolkit.config. For example, the following AjaxControlToolkit.config file defines two Control Bundles: <ajaxControlToolkit> <controlBundles> <controlBundle> <control name="CalendarExtender" /> <control name="ComboBox" /> </controlBundle> <controlBundle name="CalendarBundle"> <control name="CalendarExtender"></control> </controlBundle> </controlBundles> </ajaxControlToolkit> The first Control Bundle in the file above does not have a name. When a Control Bundle does not have a name then it becomes the default Control Bundle for your entire application. The default Control Bundle is used by the ToolkitScriptManager by default. For example, the default Control Bundle is used when you declare the ToolkitScriptManager like this:  <ajaxToolkit:ToolkitScriptManager runat=”server” /> The default Control Bundle defined in the file above includes all of the scripts required for the CalendarExtender and ComboBox controls. All of the scripts required for both of these controls are combined, minified, gzipped, and cached automatically. The AjaxControlToolkit.config file above also defines a second Control Bundle with the name CalendarBundle. Here’s how you would use the CalendarBundle with the ToolkitScriptManager: <ajaxToolkit:ToolkitScriptManager runat="server"> <ControlBundles> <ajaxToolkit:ControlBundle Name="CalendarBundle" /> </ControlBundles> </ajaxToolkit:ToolkitScriptManager> In this case, only the JavaScript files required by the CalendarExtender control, and not the ComboBox, would be downloaded because the CalendarBundle lists only the CalendarExtender control. You can use multiple named control bundles with the ToolkitScriptManager and you will get all of the scripts from both bundles. 
Support for ControlBundles is a new feature of the ToolkitScriptManager that we introduced with this release. We extended the ToolkitScriptManager to support the Control Bundles that you can define in the AjaxControlToolkit.config file. Let me be explicit about the rules for Control Bundles: 1. If you do not create an AjaxControlToolkit.config file then the ToolkitScriptManager will download all of the JavaScript files required for all of the controls in the Ajax Control Toolkit. This is the easy but low performance option. 2. If you create an AjaxControlToolkit.config file and create a ControlBundle without a name then the ToolkitScriptManager uses that Control Bundle by default. For example, if you plan to use only the CalendarExtender and ComboBox controls in your application then you should create a default bundle that lists only these two controls. 3. If you create an AjaxControlToolkit.config file and create one or more named Control Bundles then you can use these named Control Bundles with the ToolkitScriptManager. For example, you might want to use different subsets of the Ajax Control Toolkit controls in different sections of your app. I should also mention that you can use the AjaxControlToolkit.config file with custom Ajax Control Toolkit controls – new controls that you write. For example, here is how you would register a set of custom controls from an assembly named MyAssembly: <ajaxControlToolkit> <controlBundles> <controlBundle name="CustomBundle"> <control name="MyAssembly.MyControl1" assembly="MyAssembly" /> <control name="MyAssembly.MyControl2" assembly="MyAssembly" /> </controlBundle> </ajaxControlToolkit> What about ASP.NET Bundling and Minification? The idea of Control Bundles is similar to the idea of Script Bundles used in ASP.NET Bundling and Minification. You might be wondering why we didn’t simply use Script Bundles with the Ajax Control Toolkit. There were several reasons. First, ASP.NET Bundling does not work with scripts embedded in an assembly. Because all of the scripts used by the Ajax Control Toolkit are embedded in the AjaxControlToolkit.dll assembly, ASP.NET Bundling was not an option. Second, Web Forms developers typically think at the level of controls and not at the level of individual scripts. We believe that it makes more sense for a Web Forms developer to specify the controls that they need in an app (CalendarExtender, ToggleButton) instead of the individual scripts that they need in an app (the 15 or so scripts required by the CalenderExtender). Finally, ASP.NET Bundling does not work with older versions of ASP.NET. The Ajax Control Toolkit needs to support ASP.NET 3.5, ASP.NET 4.0, and ASP.NET 4.5. Therefore, using ASP.NET Bundling was not an option. There is nothing wrong with using Control Bundles and Script Bundles side-by-side. The ASP.NET 4.0 and 4.5 ToolkitScriptManager supports both approaches to bundling scripts. Using the AjaxControlToolkit.CombineScriptsHandler Browsers cache JavaScript files by URL. For example, if you request the exact same JavaScript file from two different URLs then the exact same JavaScript file must be downloaded twice. However, if you request the same JavaScript file from the same URL more than once then it only needs to be downloaded once. With this release of the Ajax Control Toolkit, we have introduced a new HTTP Handler named the AjaxControlToolkit.CombineScriptsHandler. 
    If you register this handler in your web.config file then the Ajax Control Toolkit can cache your JavaScript files for up to one year in the future automatically. You should register the handler in two places in your web.config file: in the <httpHandlers> section and the <system.webServer> section (don’t forget to register the handler for the AjaxFileUpload while you are there!). <httpHandlers> <add verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit" /> <add verb="*" path="CombineScriptsHandler.axd" type="AjaxControlToolkit.CombineScriptsHandler, AjaxControlToolkit" /> </httpHandlers> <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <handlers> <add name="AjaxFileUploadHandler" verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit" /> <add name="CombineScriptsHandler" verb="*" path="CombineScriptsHandler.axd" type="AjaxControlToolkit.CombineScriptsHandler, AjaxControlToolkit" /> </handlers> </system.webServer> The handler is only used in release mode and not in debug mode. You can enable release mode in your web.config file like this: <compilation debug="false"> You can also override the web.config setting with the ToolkitScriptManager like this: <act:ToolkitScriptManager ScriptMode="Release" runat="server"/> In release mode, scripts are combined, minified, gzipped, and cached with a far future cache header automatically. When the handler is not registered, scripts are requested from the page that contains the ToolkitScriptManager. When the handler is registered in the web.config file, scripts are requested from the handler. If you want the best performance, always register the handler. That way, the Ajax Control Toolkit can cache the bundled scripts across page requests with a far future cache header. If you don’t register the handler then a new JavaScript file must be downloaded whenever you travel to a new page. Dynamic Bundling and Minification Previous releases of the Ajax Control Toolkit used a Visual Studio build task to minify the JavaScript files used by the Ajax Control Toolkit controls. The disadvantage of this approach to minification is that it made it difficult to create custom Ajax Control Toolkit controls. Starting with this release of the Ajax Control Toolkit, we support dynamic minification. The JavaScript files in the Ajax Control Toolkit are minified at runtime instead of at build time. Scripts are minified only when in release mode. You can specify release mode with the web.config file or with the ToolkitScriptManager ScriptMode property. Because of this change, the Ajax Control Toolkit now depends on the Ajax Minifier. You must include a reference to AjaxMin.dll in your Visual Studio project or you cannot take advantage of runtime minification. If you install the Ajax Control Toolkit from NuGet then AjaxMin.dll is added to your project as a NuGet dependency automatically. If you download the Ajax Control Toolkit from CodePlex then the AjaxMin.dll is included in the download. This change means that you no longer need to do anything special to create a custom Ajax Control Toolkit control. As an open source project, we hope more people will contribute to the Ajax Control Toolkit (Yes, I am looking at you.) We have been working hard on making it much easier to create new custom controls. More on this subject with the next release of the Ajax Control Toolkit. 
A Single Visual Studio Solution We also made substantial changes to the Visual Studio solution and projects used by the Ajax Control Toolkit with this release. This change will matter to you only if you need to work directly with the Ajax Control Toolkit source code. In previous releases of the Ajax Control Toolkit, we maintained separate solution and project files for ASP.NET 3.5, ASP.NET 4.0, and ASP.NET 4.5. Starting with this release, we now support a single Visual Studio 2012 solution that takes advantage of multi-targeting to build ASP.NET 3.5, ASP.NET 4.0, and ASP.NET 4.5 versions of the toolkit. This change means that you need Visual Studio 2012 to open the Ajax Control Toolkit project downloaded from CodePlex. For details on how we setup multi-targeting, please see Budi Adiono’s blog post: http://www.budiadiono.com/2013/07/25/visual-studio-2012-multi-targeting-framework-project/ Summary You can take advantage of this release of the Ajax Control Toolkit to significantly improve the performance of your website. You need to do two things: 1) You need to create an AjaxControlToolkit.config file which lists the controls used in your app and 2) You need to register the AjaxControlToolkit.CombineScriptsHandler in the web.config file. We made substantial changes to the Ajax Control Toolkit with this release. We think these changes will result in much better performance for multipage apps and make the process of building custom controls much easier. As always, we look forward to hearing your feedback.

    Read the article

  • How To Restore Firefox Options To Default Without Uninstalling

    - by Gopinath
    Firefox plugins are awesome and they are the pillars of the huge success of the Firefox browser. Plugins vary from simple ones, like changing the color scheme of the browser, to powerful ones, like changing the behavior of the browser itself. Recently I installed one of the powerful Firefox plugins and played around to tweak the behavior of the browser. At the end of my half hour of play, Firefox had become completely useless and stopped rendering web pages properly. To continue using Firefox I had to restore it to its default settings. But I didn't want to uninstall and then install it again, as it's a time-consuming process and I'd also lose all the plugins I'm using. So how did I restore the default settings in a single click? Default Settings Restore Through Safe Mode Options It's very easy to restore the default settings of Firefox with the safe mode options. All we need to do is: 1. Close all the Firefox browser windows that are open. 2. Launch Firefox in safe mode. 3. Choose the option Reset all user preferences to Firefox defaults. 4. Click on the Make Changes and Restart button. Note: When Firefox restores the default settings, it erases all the stored passwords, browser history and other settings you have made. That's all. This excellent feature of Firefox saved me from great pain and I hope it's going to help you too. Join us on Facebook to read all our stories right inside your Facebook news feed.

    Read the article

  • Oracle UCM Integration with WebCenter

    - by john.brunswick
    Portal deployments always contain some level of content that requires management. Like peanut butter and jelly, or yin and yang, they are inseparable. Unfortunately, unlike peanut butter and jelly, content and portals usually require that an extensive amount of work be completed to create a seamless experience for the end users who will be serviced by the portal, as well as for the users who will be contributing and managing the content. With WebCenter Suite, Oracle has understood this need and addressed it by including Universal Content Management (UCM, formerly Stellent) licensing to allow content to be delivered into the portal from a mature, class-leading content management technology. To unlock the most value from this content technology, WebCenter portal technology can leverage a series of integration strategies available through its open standards support, as well as a series of native components that enable content consumption from UCM. This has been done to enable IT teams to reduce solution deployment time and provide quick wins to their business stakeholders. The ongoing cost of ownership for the solution is also greatly reduced through these various integrations. Within this post we will explore various ways in which content can be contributed through out-of-the-box interfaces, displayed natively within the portal (configuration), and exposed programmatically (development). The information below showcases how to quickly take advantage of WebCenter's marriage of content and portal technologies, then leverage the various programmatic integrations available with UCM.

    Read the article

  • New Fusion Community, Community Name Changes and Upcoming Webcasts

    - by cwarticki
    Check out the new MOS Customer Relationship Management (CRM) community. This community has been featured in marketing events and is one of the more active communities so far. Support has also renamed the Fusion HCM community (now Human Capital Management (HCM)) and the Technical – FA community (now Fusion Applications Technology) in order to standardize our naming convention. Finally, we have two upcoming webcasts: 18-OCT-2012 : Fusion Apps Security - User & Role Management using Oracle Identity Manager featured in our Fusion Applications Technology community 01-NOV-2012: Fusion Apps Security – Troubleshoot Data Role Issues featured in our Fusion Applications Technology community. Check out our new Community. Attend our upcoming webcasts. Participate.  Engage. Contribute. ~Chris

    Read the article

  • SQLAuthority News – Author Visit to Nepal TechMela – 2 Technical Sessions

    - by pinaldave
    Microsoft MDP Nepal is going to organize a Tech Mela for the IT community of Nepal on March 29 & 30, 2010 (2066 Chaitra 16 & 17), Monday and Tuesday,  at the Russian Center for Science & Culture, Kamalpokhari, Kathmandu. The objective of the event is to enhance and exchange knowledge about Information Technology, as well as Microsoft products and technologies, with the IT community. I am very excited to attend this one-of-a-kind event in Nepal. I will be giving two presentations in the said event, which includes: 1) Become An Efficient Developer – Learn The Tricks of SQL Server Management Studio (SSMS) There are so many features in SQL Server Management Studio that we may not know or use all of them. This presentation will impart tricks and tips of SQL Server Management Studio to the event goers. This aims to make you an efficient developer by having an edge over the other developers. 2) Good, Bad and Ugly: The Story of Index Index is often considered as the sure shot tool of improving the performance of any query. Learn the basics with examples and discover the good, bad and ugly sides of Index. This session will help you efficiently write queries in future. I am very excited to attend this special event as this is the very first time I will be presenting in technical sessions in Nepal. If you are in Nepal, I strongly suggest that you go to this once-in a lifetime IT fair. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • How to Reuse Your Old Wi-Fi Router as a Network Switch

    - by Jason Fitzpatrick
    Just because your old Wi-Fi router has been replaced by a newer model doesn’t mean it needs to gather dust in the closet. Read on as we show you how to take an old and underpowered Wi-Fi router and turn it into a respectable network switch (saving your $20 in the process). Image by mmgallan. Why Do I Want To Do This? Wi-Fi technology has changed significantly in the last ten years but Ethernet-based networking has changed very little. As such, a Wi-Fi router with 2006-era guts is lagging significantly behind current Wi-Fi router technology, but the Ethernet networking component of the device is just as useful as ever; aside from potentially being only 100Mbs instead of 1000Mbs capable (which for 99% of home applications is irrelevant) Ethernet is Ethernet. What does this matter to you, the consumer? It means that even though your old router doesn’t hack it for your Wi-Fi needs any longer the device is still a perfectly serviceable (and high quality) network switch. When do you need a network switch? Any time you want to share an Ethernet cable among multiple devices, you need a switch. For example, let’s say you have a single Ethernet wall jack behind your entertainment center. Unfortunately you have four devices that you want to link to your local network via hardline including your smart HDTV, DVR, Xbox, and a little Raspberry Pi running XBMC. Instead of spending $20-30 to purchase a brand new switch of comparable build quality to your old Wi-Fi router it makes financial sense (and is environmentally friendly) to invest five minutes of your time tweaking the settings on the old router to turn it from a Wi-Fi access point and routing tool into a network switch–perfect for dropping behind your entertainment center so that your DVR, Xbox, and media center computer can all share an Ethernet connection. What Do I Need? For this tutorial you’ll need a few things, all of which you likely have readily on hand or are free for download. To follow the basic portion of the tutorial, you’ll need the following: 1 Wi-Fi router with Ethernet ports 1 Computer with Ethernet jack 1 Ethernet cable For the advanced tutorial you’ll need all of those things, plus: 1 copy of DD-WRT firmware for your Wi-Fi router We’re conducting the experiment with a Linksys WRT54GL Wi-Fi router. The WRT54 series is one of the best selling Wi-Fi router series of all time and there’s a good chance a significant number of readers have one (or more) of them stuffed in an office closet. Even if you don’t have one of the WRT54 series routers, however, the principles we’re outlining here apply to all Wi-Fi routers; as long as your router administration panel allows the necessary changes you can follow right along with us. A quick note on the difference between the basic and advanced versions of this tutorial before we proceed. Your typical Wi-Fi router has 5 Ethernet ports on the back: 1 labeled “Internet”, “WAN”, or a variation thereof and intended to be connected to your DSL/Cable modem, and 4 labeled 1-4 intended to connect Ethernet devices like computers, printers, and game consoles directly to the Wi-Fi router. When you convert a Wi-Fi router to a switch, in most situations, you’ll lose two port as the “Internet” port cannot be used as a normal switch port and one of the switch ports becomes the input port for the Ethernet cable linking the switch to the main network. This means, referencing the diagram above, you’d lose the WAN port and LAN port 1, but retain LAN ports 2, 3, and 4 for use. 
If you only need to switch for 2-3 devices this may be satisfactory. However, for those of you that would prefer a more traditional switch setup where there is a dedicated WAN port and the rest of the ports are accessible, you’ll need to flash a third-party router firmware like the powerful DD-WRT onto your device. Doing so opens up the router to a greater degree of modification and allows you to assign the previously reserved WAN port to the switch, thus opening up LAN ports 1-4. Even if you don’t intend to use that extra port, DD-WRT offers you so many more options that it’s worth the extra few steps. Preparing Your Router for Life as a Switch Before we jump right in to shutting down the Wi-Fi functionality and repurposing your device as a network switch, there are a few important prep steps to attend to. First, you want to reset the router (if you just flashed a new firmware to your router, skip this step). Following the reset procedures for your particular router or go with what is known as the “Peacock Method” wherein you hold down the reset button for thirty seconds, unplug the router and wait (while still holding the reset button) for thirty seconds, and then plug it in while, again, continuing to hold down the rest button. Over the life of a router there are a variety of changes made, big and small, so it’s best to wipe them all back to the factory default before repurposing the router as a switch. Second, after resetting, we need to change the IP address of the device on the local network to an address which does not directly conflict with the new router. The typical default IP address for a home router is 192.168.1.1; if you ever need to get back into the administration panel of the router-turned-switch to check on things or make changes it will be a real hassle if the IP address of the device conflicts with the new home router. The simplest way to deal with this is to assign an address close to the actual router address but outside the range of addresses that your router will assign via the DHCP client; a good pick then is 192.168.1.2. Once the router is reset (or re-flashed) and has been assigned a new IP address, it’s time to configure it as a switch. Basic Router to Switch Configuration If you don’t want to (or need to) flash new firmware onto your device to open up that extra port, this is the section of the tutorial for you: we’ll cover how to take a stock router, our previously mentioned WRT54 series Linksys, and convert it to a switch. Hook the Wi-Fi router up to the network via one of the LAN ports (consider the WAN port as good as dead from this point forward, unless you start using the router in its traditional function again or later flash a more advanced firmware to the device, the port is officially retired at this point). Open the administration control panel via  web browser on a connected computer. Before we get started two things: first,  anything we don’t explicitly instruct you to change should be left in the default factory-reset setting as you find it, and two, change the settings in the order we list them as some settings can’t be changed after certain features are disabled. To start, let’s navigate to Setup ->Basic Setup. Here you need to change the following things: Local IP Address: [different than the primary router, e.g. 192.168.1.2] Subnet Mask: [same as the primary router, e.g. 
255.255.255.0] DHCP Server: Disable Save with the “Save Settings” button and then navigate to Setup -> Advanced Routing: Operating Mode: Router This particular setting is very counterintuitive. The “Operating Mode” toggle tells the device whether or not it should enable the Network Address Translation (NAT)  feature. Because we’re turning a smart piece of networking hardware into a relatively dumb one, we don’t need this feature so we switch from Gateway mode (NAT on) to Router mode (NAT off). Our next stop is Wireless -> Basic Wireless Settings: Wireless SSID Broadcast: Disable Wireless Network Mode: Disabled After disabling the wireless we’re going to, again, do something counterintuitive. Navigate to Wireless -> Wireless Security and set the following parameters: Security Mode: WPA2 Personal WPA Algorithms: TKIP+AES WPA Shared Key: [select some random string of letters, numbers, and symbols like JF#d$di!Hdgio890] Now you may be asking yourself, why on Earth are we setting a rather secure Wi-Fi configuration on a Wi-Fi router we’re not going to use as a Wi-Fi node? On the off chance that something strange happens after, say, a power outage when your router-turned-switch cycles on and off a bunch of times and the Wi-Fi functionality is activated we don’t want to be running the Wi-Fi node wide open and granting unfettered access to your network. While the chances of this are next-to-nonexistent, it takes only a few seconds to apply the security measure so there’s little reason not to. Save your changes and navigate to Security ->Firewall. Uncheck everything but Filter Multicast Firewall Protect: Disable At this point you can save your changes again, review the changes you’ve made to ensure they all stuck, and then deploy your “new” switch wherever it is needed. Advanced Router to Switch Configuration For the advanced configuration, you’ll need a copy of DD-WRT installed on your router. Although doing so is an extra few steps, it gives you a lot more control over the process and liberates an extra port on the device. Hook the Wi-Fi router up to the network via one of the LAN ports (later you can switch the cable to the WAN port). Open the administration control panel via web browser on the connected computer. Navigate to the Setup -> Basic Setup tab to get started. In the Basic Setup tab, ensure the following settings are adjusted. The setting changes are not optional and are required to turn the Wi-Fi router into a switch. WAN Connection Type: Disabled Local IP Address: [different than the primary router, e.g. 192.168.1.2] Subnet Mask: [same as the primary router, e.g. 255.255.255.0] DHCP Server: Disable In addition to disabling the DHCP server, also uncheck all the DNSMasq boxes as the bottom of the DHCP sub-menu. If you want to activate the extra port (and why wouldn’t you), in the WAN port section: Assign WAN Port to Switch [X] At this point the router has become a switch and you have access to the WAN port so the LAN ports are all free. Since we’re already in the control panel, however, we might as well flip a few optional toggles that further lock down the switch and prevent something odd from happening. The optional settings are arranged via the menu you find them in. Remember to save your settings with the save button before moving onto a new tab. While still in the Setup -> Basic Setup menu, change the following: Gateway/Local DNS : [IP address of primary router, e.g. 
192.168.1.1] NTP Client : Disable The next step is to turn off the radio completely (which not only kills the Wi-Fi but actually powers the physical radio chip off). Navigate to Wireless -> Advanced Settings -> Radio Time Restrictions: Radio Scheduling: Enable Select “Always Off” There’s no need to create a potential security problem by leaving the Wi-Fi radio on, the above toggle turns it completely off. Under Services -> Services: DNSMasq : Disable ttraff Daemon : Disable Under the Security -> Firewall tab, uncheck every box except “Filter Multicast”, as seen in the screenshot above, and then disable SPI Firewall. Once you’re done here save and move on to the Administration tab. Under Administration -> Management:  Info Site Password Protection : Enable Info Site MAC Masking : Disable CRON : Disable 802.1x : Disable Routing : Disable After this final round of tweaks, save and then apply your settings. Your router has now been, strategically, dumbed down enough to plod along as a very dependable little switch. Time to stuff it behind your desk or entertainment center and streamline your cabling.     

    Read the article

  • Invitation: Manageability Partner Community

    - by pfolgado
    Oracle PartnerNetwork | Account | Feedback WELCOME TO THE NEW ORACLE EMEA MANAGEABILITY PARTNER COMMUNITY Dear partner You are receiving this message because you are a registered member of the Oracle Applications & Systems Management Partner Community in EMEA. With occasion of the announcement of Oracle Enterprise Manager 12c we are revitalizing and rebranding our EMEA Applications & Systems Management Partner Community. To do this we have improved the community platform, for better and increased collaboration: The EMEA Applications & Systems Management Partner Community is now renamed to "Manageability Partner Community EMEA" We have created a Manageability Community blog and a Collaboration Workspace: The EMEA Manageability Partner Community blog is a public blog and we use it to provide quick and easy communication to the community members. (Please bookmark or subscribe to the RSS feeds). The EMEA Manageability Partner Community Collaborative Workspace is a restricted area that only community members can access. It contains materials from community events, sales kits, implementation experiences, reserved for community members. It also allows for partners to share content and collaborate with other community members. As a registered member of the community you have already been granted access to this restricted area. A dedicated team that manages the EMEA Manageability on a continuous basis. What do you have to do? All you have to do now is to bookmark the EMEA Manageability Partner Community blog page or subscribe to the blog's RSS feeds and use this as your central point of contact for Manageability information from Oracle. I look forward to develop a strong community in the Manageability area, where Oracle Manageability partners can share experiences and mutually benefit. Best regards, Javier Puerta Director Core Technology Partner Programs Alliances & Channels EMEA Phone: +34 916 312 41 Mobile: +34 609 062 373 Patrick Rood EMEA Partner Programs for Manageability Oracle EMEA Technology Phone: +31 306 627 969 Mobile: +31 611 954 277 Copyright © 2011, Oracle and/or its affiliates. All rights reserved. Contact PBC | Legal Notices and Terms of Use | Privacy

    Read the article

  • system overrides my mount parameters in /etc/fstab

    - by valya
    [.../~]$ mount /dev/sda4 on / type ext4 (rw,commit=60,commit=0) [.../~]$ cat /etc/fstab # UNCONFIGURED FSTAB FOR BASE SYSTEM UUID=70739c04-fcb6-4747-803c-824f9c894f41 / ext4 defaults,commit=60 0 1 What can I do about it? It seems strange. I want to be able to set any commit time I want. Edit: added /proc/mounts contents [.../~]$ cat /proc/mounts rootfs / rootfs rw 0 0 none /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0 none /proc proc rw,nosuid,nodev,noexec,relatime 0 0 none /dev devtmpfs rw,relatime,size=886332k,nr_inodes=221583,mode=755 0 0 none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0 fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0 /dev/disk/by-uuid/70739c04-fcb6-4747-803c-824f9c894f41 / ext4 rw,relatime,barrier=1,data=ordered 0 0 none /sys/kernel/debug debugfs rw,relatime 0 0 none /sys/kernel/security securityfs rw,relatime 0 0 none /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0 none /var/run tmpfs rw,nosuid,relatime,mode=755 0 0 none /var/lock tmpfs rw,nosuid,nodev,noexec,relatime 0 0 /dev/sda3 /media/megahard fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,blksize=4096 0 0 cgroup /dev/cgroup/cpu cgroup rw,relatime,cpu,release_agent=/usr/local/sbin/cgroup_clean 0 0 gvfs-fuse-daemon /home/va1en0k/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
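    As a side note (not part of the original question), one way to check whether the commit interval is actually being honored is to re-apply it at runtime with a remount and then re-read what the kernel reports for the root filesystem. A minimal sketch, assuming the ext4 root on /dev/sda4 shown above:

        # Re-apply the commit interval without rebooting, then check the
        # mount options the kernel is actually using for the root filesystem.
        sudo mount -o remount,commit=60 /
        grep ' / ext4 ' /proc/mounts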

    Read the article

  • Oracle announces the new release of Oracle Hyperion EPM System

    - by Stefano Oddone
    On April 4th, during the Oracle Open World event held in Tokyo, Mark Hurd, President of Oracle, announced the upcoming release 11.1.2.2 of Oracle Hyperion Enterprise Performance Management System, the leading platform in the worldwide EPM market. The new release introduces a highly significant set of new modules, improvements to existing modules, and technological and functional enhancements that further increase the value and competitive advantage delivered by the Oracle offering. Among the main highlights: introduction of the new Oracle Hyperion Project Financial Planning module, a vertical solution for the financial planning, funding and budgeting of projects, initiatives, activities and engagements; enrichment of Oracle Hyperion Planning with built-in capabilities for Predictive Planning and Rolling Forecasts, supporting increasingly flexible, frequent and effective budgeting and forecasting processes; introduction of the new Oracle Account Reconciliation Manager module for managing the entire life cycle of account reconciliation activities between General Ledger and Sub-Ledgers, or between different accounting systems; enrichment of Oracle Hyperion Financial Management with a completely new web interface and the introduction of Smart Dimensionality, that is, the ability to define models with more than the 12 "canonical" dimensions typical of previous releases, with optimized handling of queries and calculations based on the cardinality of the dimensions involved; enrichment of Oracle Hyperion Profitability & Cost Management with Detailed Profitability capabilities, that is, the ability to implement costing and profitability models with very high-cardinality dimensions such as the SKUs of the Retail and Distribution industries, the customers of retail banks and telcos, or the individual accounts of utilities; enrichment of Oracle Hyperion Financial Data Quality Management, in particular the ERP Integrator component, extending the pre-built integrations to SAP Financials and JD Edwards Enterprise One Financials; introduction of Oracle Exalytics, the first engineered system specifically designed for In-Memory Analytics, which delivers unprecedented calculation and analysis performance as data volumes, model sizes and user concurrency grow, thus supporting ever more sophisticated and widely deployed Business Intelligence, Planning & Budgeting and Cost Allocation processes. On April 19th, the Oracle office in Cinisello Balsamo (MI) will host an event presenting in detail the new features introduced by the new release of the EPM System; the event will be repeated on May 3rd at the Oracle office in Rome. The event is public and free of charge; anyone interested can register here. For further information, please refer to the official Press Release. Here you can also watch Mark Hurd's Open World session on the Oracle strategy for Business Analytics.

    Read the article

  • Reminder: True WCF Asynchronous Operation

    - by Sean Feldman
    A true asynchronous service operation is not the one that returns void, but the one that is marked as IsOneWay=true and implemented using BeginX/EndX asynchronous operations (thanks Krzysztof). To support this sort of fire-and-forget invocation, Windows Communication Foundation offers one-way operations. After the client issues the call, Windows Communication Foundation generates a request message, but no correlated reply message will ever return to the client. As a result, one-way operations can't return values, and any exception thrown on the service side will not make its way to the client. One-way calls do not equate to asynchronous calls. When one-way calls reach the service, they may not be dispatched all at once and may be queued up on the service side to be dispatched one at a time, all according to the service's configured concurrency mode and session mode. How many messages (whether one-way or request-reply) the service is willing to queue up is a product of the configured channel and the reliability mode. If the number of queued messages has exceeded the queue's capacity, then the client will block, even when issuing a one-way call. However, once the call is queued, the client is unblocked and can continue executing while the service processes the operation in the background. This usually gives the appearance of asynchronous calls.

    Read the article

  • Sync custom AD properties to SharePoint Profile

    - by KunaalKapoor
    Here are some step-by-step instructions regarding configuring SharePoint to sync with custom AD attributes: 1. Add the custom attribute in Active Directory. This part will have to be your doing; here is some documentation regarding creating custom attributes in AD: http://msdn.microsoft.com/en-us/library/ms675085(VS.85).aspx , http://technet.microsoft.com/en-us/magazine/2008.05.schema.aspx , http://blogs.technet.com/b/isingh/archive/2007/02/18/adding-custom-attributes-in-active-directory.aspx 2. Open up miisclient.exe (C:\Program Files\Microsoft Office Servers\14.0\Synchronization Service\UIShell\miisclient.exe). a. This will have to be opened up with the farm admin account. 3. Click on "Management Agents" in the ribbon. 4. Right-click the Active Directory Management Agent ("MOSS-<name of sync connection>") and click "Refresh Schema". a. When prompted, enter the credentials for the farm account. 5. Once complete, close out of miisclient.exe. 6. Go into Central Admin --> Application Management --> Manage Service Applications --> go into the User Profile Service Application. 7. Click on "Manage User Properties". 8. Click on "New Property". 9. Put in the correct information regarding the attribute that was created. 10. At the bottom of this page, under the "Source Data Connection" drop-down, select the AD synchronization connection you have already configured. 11. For the "Attribute" drop-down, select the new attribute you have created. 12. For the "Direction" drop-down, select "Import". 13. Click "OK". 14. Run a full synchronization for the User Profile Service Application and the custom property will get synced (as long as the attribute is set in Active Directory for the desired users).

    Read the article

  • Experience your music in a whole new way with Zune for PC

    - by Matthew Guay
    Tired of the standard Media Player look and feel, and want something new and innovative?  Zune offers a fresh, new way to enjoy your music, videos, pictures, and podcasts, whether or not you own a Zune device. Microsoft started out on a new multimedia experience for PCs and mobile devices with the launch of the Zune several years ago.  The Zune devices have been well received and noted for their innovative UI, and the Zune HD’s fluid interface is the foundation for the widely anticipated Windows Phone 7.  But regardless of whether or not you have a Zune Device, you can still use the exciting new UI and services directly from your PC.  Zune for Windows is a very nice media player that offers a music and video store and wide support for multimedia formats including those used in Apple products.  And if you enjoy listening to a wide variety of music, it also offers the Zune Pass which lets you stream an unlimited number of songs to your computer and download 10 songs for keeps per month for $14.99/month. Or you can do a pre-paid music card as well.  It does all this using the new Metro UI which beautifully shows information using text in a whole new way.  Here’s a quick look at setting up and using Zune on your PC. Getting Started Download the installer (link below), and run it to begin setup.  Please note that Zune offers a separate version for computers running the 64 bit version of Windows Vista or 7, so choose it if your computer is running these. Once your download is finished, run the installer to setup Zune on your computer.  Accept the EULA when prompted. If there are any updates available, they will automatically download and install during the setup.  So, if you’re installing Zune from a disk (for example, one packaged with a Zune device), you don’t have to worry if you have the latest version.  Zune will proceed to install on your computer.   It may prompt you to restart your computer after installation; click Restart Now so you can proceed with your Zune setup.  The reboot appears to be for Zune device support, and the program ran fine otherwise without rebooting, so you could possibly skip this step if you’re not using a Zune device.  However, to be on the safe side, go ahead and reboot. After rebooting, launch Zune.  It will play a cute introduction video on first launch; press skip if you don’t want to watch it. Zune will now ask you if you want to keep the default settings or change them.  Choose Start to keep the defaults, or Settings to customize to your wishes.  Do note that the default settings will set Zune as your default media player, so click Settings if you wish to change this. If you choose to change the default settings, you can change how Zune finds and stores media on your computer.  In Windows 7, Zune will by default use your Windows 7 Libraries to manage your media, and will in fact add a new Podcasts library to Windows 7. If your media is stored on another location, such as on a server, then you can add this to the Library.  Please note that this adds the location to your system-wide library, not just the Zune player. There’s one last step.  Enter three of your favorite artists, and Zune will add Smart DJ mixes to your Quickplay list based on these.  Some less famous or popular artists may not be recognized, so you may have to try another if your choice isn’t available.  Or, you can click Skip if you don’t want to do this right now. Welcome to Zune!  This is the default first page, QuickPlay, where you can easily access your pinned and new items.   
If you have a Zune account, or would like to create a new one, click Sign In on the top. Creating a new account is quick and simple, and if you’re new to Zune, you can try out a 14 day trial of Zune Pass for free if you want. Zune allows you to share your listening habits and favorites with friends or the world, but you can turn this off or change it if you like. Using Zune for Windows To access your media, click the Collection link on the top left.  Zune will show all the media you already have stored on your computer, organized by artist and album. Right-click on any album, and you choose to have Zune find album art or do a variety of other tasks with the media.   When playing media, you can view it in several unique ways.  First, the default Mix view will show related tracks to the music you’re playing from Smart DJ.  You can either play these fully if you’re a Zune Pass subscriber, or otherwise you can play 30 second previews. Then, for many popular artists, Zune will change the player background to show pictures and information in a unique way while the music is playing.  The information may range from history about the artist to the popularity of the song being played.   Zune also works as a nice viewer for the pictures on your computer. Start a slideshow, and Zune will play your pictures with nice transition effects and music from your library. Zune Store The Zune Store offers a wide variety of music, TV shows, and videos for purchase.  If you’re a Zune Pass subscriber, you can listen to or download any song without purchasing it; otherwise, you can preview a 30 second clip first. Zune also offers a wide selection of Podcasts you can subscribe to for free. Using Zune for PC with a Zune Device If you have a Zune device attached to your computer, you can easily add media files to it by simply dragging them to the Zune device icon in the left corner.  In the future, this will also work with Windows Phone 7 devices. If you have a Zune HD, you can also download and add apps to your device. Here’s the detailed information window for the weather app.  Click Download to add it to your device.   Mini Mode The Zune player generally takes up a large portion of your screen, and is actually most impressive when run maximized.  However, if you’re simply wanting to enjoy your tunes while you’re using your computer, you can use the Mini mode to still view music info and control Zune in a smaller mode.  Click the Mini Player button near the window control buttons in the top right to activate it. Now Zune will take up much less of your desktop.  This window will stay on top of other windows so you can still easily view and control it. Zune will display an image of the artist if one is available, and this shows up in Mini mode more often than it does in the full mode. And, in Windows 7, you could simply minimize Zune as you can control it directly from the taskbar thumbnail preview.   Even more controls are available from Zune’s jumplist in Windows 7.  You can directly access your Quickplay links or choose to shuffle all music without leaving the taskbar. Settings Although Zune is designed to be used without confusing menus and settings, you can tweak the program to your liking from the settings panel.  Click Settings near the top left of the window. Here you can change file storage, types, burn, metadata, and many more settings.  You can also setup Zune to stream media to your XBOX 360 if you have one.   You can also customize Zune’s look with a variety of modern backgrounds and gradients. 
Conclusion If you’re ready for a fresh way to enjoy your media, Zune is designed for you.  Its innovative UI definitely sets it apart from standard media players, and it is very pleasing to use.  Zune is especially nice if your computer is running XP, Vista Home Basic, or 7 Starter, as these versions of Windows don’t include Media Center.  Additionally, the mini player mode is a nice touch that brings a feature of Windows 7’s Media Player to XP and Vista.  Zune is definitely one of our favorite music apps.  Try it out, and get a fresh view of your music today! Link: Download Zune for Windows

    Read the article

  • That Escalated Quickly

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2014/05/17/that-escalated-quickly.aspx I have been working remotely out of my home for over 4 years now. All of my coworkers during that time have also worked remotely. Lots of folks have written about the challenges inherent in facilitating communication on remote teams and strategies for overcoming them. A popular theme around this topic is the notion of “escalating communication”. In this context “escalating” means taking a conversation from one mode of communication to a different, higher fidelity mode of communication. Here are the five modes of communication I use at work in order of increasing fidelity: Email – This is the “lowest fidelity” mode of communication that I use. I usually only check it a few times a day (and I’m trying to check it even less frequently than that) and I only keep items in my inbox if they represent an item I need to take action on that I haven’t tracked anywhere else. Forums / Message boards – Being a developer, I’ve gotten into the habit of having other people look over my code before it becomes part of the product I’m working on. These code reviews often happen in “real time” via screen sharing, but I also always have someone else give all of the changes another look using pull requests. A pull request takes my code and lets someone else see the changes I’ve made side-by-side with the existing code so they can see if I did anything dumb. Pull requests can facilitate a conversation about the code changes in an online-forum-like style. Some teams I’ve worked on also liked using tools like Trello or Google Groups to have on-going conversations about a topic or task that was being worked on. Chat & Instant Messaging – Chat and instant messaging are the real workhorses for communication on the remote teams I’ve been a part of. I know some teams that are co-located that also use it pretty extensively for quick messages that don’t warrant walking across the office to talk with someone but require more immediacy than an e-mail. For the purposes of this post I think it’s important to note that the terms “chat” and “instant messaging” might insinuate that the conversation is happening in real time, but that’s not always true. Modern chat and IM applications maintain a searchable history so people can easily see what might have been discussed while they were away from their computers. Voice, Video and Screen sharing – Everyone’s got a camera and microphone on their computers now, and there is an abundance of services that will let you use them to talk to other people who have cameras and microphones on their computers. I’m including screen sharing here as well because, in my experience, these discussions typically involve one or more people showing the other participants something that’s happening on their screen. Obviously, this mode of communication is much higher-fidelity than any of the ones listed above. Scheduled meetings are typically conducted using this mode of communication. In Person – No matter how great communication tools become, there’s no substitute for meeting with someone face-to-face. However, opportunities for this kind of communication are few and far between when you work on a remote team. When a conversation gets escalated, that usually means it moves up one or more positions on this list. A lot of people advocate jumping to #4 sooner rather than later. 
Like them, I used to believe that, if it was possible, organizing a call with voice and video was automatically better than any kind of text-based communication could be. Lately, however, I’m becoming less convinced that escalating is always the right move. Working Asynchronously Last year I attended a talk at our local code camp given by Drew Miller. Drew works at GitHub and was talking about how they use GitHub internally. Many of the folks at GitHub work remotely, so communication was one of the main themes in Drew’s talk. During the talk Drew used the phrase “asynchronous communication” to describe their use of chat and pull request comments. That phrase stuck in my head because I hadn’t heard it before, but I think it perfectly describes the way in which remote teams often need to communicate. You don’t always know when your co-workers are at their computers or what hours (if any) they are working that day. In order to work this way you need to assume that the person you’re talking to might not respond right away. You can’t always afford to wait until everyone required is online and available to join a voice call, so you need to use text-based, persistent forms of communication so that people can receive and respond to messages when they are available. Going back to my list from the beginning of this post for a second, I characterize items #1-3 as being “asynchronous” modes of communication while we could call items #4 and #5 “synchronous”. When communication gets escalated, it’s almost always moving from an asynchronous mode of communication to a synchronous one. Now, to the point of this post: I’ve become increasingly reluctant to escalate from asynchronous to synchronous communication for two primary reasons: 1 – You can often find a higher fidelity way to convey your message without holding a synchronous conversation. 2 – Asynchronous modes of communication are (usually) persistent and searchable. You Don’t Have to Broadcast Live Let’s start with the first reason I’ve listed. A lot of times you feel like you need to escalate to synchronous communication because you’re having difficulty describing something that you’re seeing in words. You want to provide the people you’re conversing with some audio-visual aids to help them understand the point that you’re trying to make, and you think that getting on Skype and sharing your screen with them is the best way to do that. Firing up a screen sharing session does work well, but you can usually accomplish the same thing in an asynchronous manner. For example, you could take a screenshot and annotate it with some text and drawings to illustrate what it is you’re seeing. If a screenshot won’t work, taking a short screen recording while you narrate over it and posting the video to your forum or chat system along with a text-based description of what’s in the recording that can be searched for later can be a great way to effectively communicate with your team asynchronously. I Said What?!? Now for the second reason I listed: most asynchronous modes of communication provide a transcript of what was said and what decisions might have been made during the conversation. There have been many occasions where I’ve used the search feature of my team’s chat application to find a conversation that happened several weeks or months ago to remember what was decided. 
Unfortunately, I think the benefits associated with the persistence of communicating asynchronously often get overlooked when people decide to escalate to an in-person meeting or voice/video call. I’m becoming much more reluctant to suggest a voice or video call if I suspect that it might lead to codifying some kind of design decision, because everyone involved is going to hang up the call and immediately forget what was decided. I recognize that you can record and archive these types of interactions, but without being able to search them the recordings aren’t terribly useful. When and How To Escalate I don’t mean to imply that communicating via voice/video or in person is never a good idea. I probably jump on a Skype call with a co-worker at least once a day to quickly hash something out or show them a bit of code that I’m working on. Also, meeting in person periodically is really important for remote teams. There’s no way around the fact that sometimes it’s easier to jump on a call and show someone my screen so they can see what I’m seeing. So when is it right to escalate? I think the simplest way to answer that is when the communication starts to feel painful. Everyone’s tolerance for that pain is different, but I think you need to let it hurt a little bit before jumping to synchronous communication. When you do escalate from asynchronous to synchronous communication, there are a couple of things you can do to maximize the effectiveness of the communication: Take notes – This is huge, and yet I’ve found that a lot of teams don’t do this. If you’re holding a meeting with more than two people, you should have someone taking notes. Taking notes while participating in a meeting can be difficult, but there are a few strategies to deal with this challenge that probably deserve a short post of their own. After the meeting, make sure the notes are posted to a place where all concerned parties (including those that might not have attended the meeting) can review and search them. Persist decisions made ASAP – If any decisions were made during the meeting, persist those decisions to a searchable medium as soon as possible following the conversation. All the teams I’ve worked on used a web-based system for tracking the on-going work and a backlog of work to be done in the future. I always try to make sure that all of the cards/stories/tasks/whatever in these systems always reflect the latest decisions that were made as the work was being planned and executed. If you held a quick call with your team lead and decided that it wasn’t worth the effort to build real-time validation into that new UI you were working on, go and codify that decision in the story associated with that work immediately after you hang up. Even better, write it up in the story while you are both still on the phone. That way, when the folks from your QA team pick up the story to test a few days later, they’ll know why the real-time validation isn’t there without having to invoke yet another conversation about the work. Communicating Well is Hard At this point you might be thinking that communicating asynchronously is more difficult than having a live conversation. You’re right: it is more difficult. In order to communicate effectively this way you need to very carefully think about the message that you’re trying to convey and craft it in a way that’s easy for your audience to understand. This is almost always harder than just talking through a problem in real time with someone; this is why escalating communication is such a popular idea. 
Why wouldn’t we want to do the thing that’s easier? Easier isn’t always better. If you and your team can get in the habit of communicating effectively in an asynchronous manner, you’ll find that, over time, all of your communications get less painful because you don’t need to reiterate previously made points over and over again. If you communicate right the first time, you often don’t need to rehash old conversations because you can go back and find the decisions that were made laid out in plain language. You’ll also find that you get better at doing things like writing useful comments in your code, creating written documentation about how the feature that you just built works, or persuading your team to do things in a certain way.

    Read the article
