Search Results

Search found 14610 results on 585 pages for 'session storage'.

Page 45 of 585

  • Mounting a Mail Store that is in a Recovery Storage Group on Exchange 2003

    - by Kyle Brandt
    If I have a production server with the mail store Foo in both the storage group companyName and the Recovery Storage Group, is it okay to mount Foo in the RSG while it is also mounted in companyName, so that I can extract some mailboxes from the recovery storage group? Basically, I am wondering whether it is okay to mount the store in both the production and recovery storage groups while the mail server and that particular mail store are in production. Reference: "Once an RSG is restored into and mounted up you can connect to it with ExMerge and read out mailboxes into PST files for merging back into a 'live' store" -- http://serverfault.com/questions/49728/test-restore-of-exchange-dbs-with-the-ms-exchange-plugin-of-netbackup-6

    Read the article

  • Performance Improvement: Session State

    Performance is critical to today's successful applications and web sites. If you design with an awareness of session state management challenges, you can always change your strategy to match your performance needs.

    Read the article

  • Slides and Links from SQL Azure session at BizSpark Azure Day in London

    - by Eric Nelson
    A big thanks to all who attended my two sessions on SQL Azure yesterday (29th March 2010). As promised, here are my slides and links from the session.

    Slides: SQL Azure Overview for BizSpark day.

    Related Links:
    UK Azure Online Community – join today
    UK Windows Azure Site
    Start working with Windows Azure
    SQL Azure maximum database size rises from 10GB to 50GB in June
    TCO and ROI calculator for Windows Azure
    SQL Azure Migration Wizard

    Read the article

  • SBUG Session: The Enterprise Cache

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman]

    I did a session on "The Enterprise Cache" at the UK SOA/BPM User Group yesterday which generated some useful discussion. The proposal was for a dedicated caching layer which all app servers and service providers can hook into, sharing resources and common data. The architecture might end up like this (diagram in the original post). I'll update this post with a link to the slide deck once it's available. The next session will have Udi Dahan walking through nServiceBus; register on EventBrite if you want to come along.

    Synopsis
    Looked at the benefits and drawbacks of app-centric isolated caches, compared to an enterprise-wide shared cache running on dedicated nodes.
    Suggested issues and risks around caching including staleness of data, resource usage, performance and testing.
    Walked through a generic service cache implemented as a WCF behaviour – suitable for IIS- or BizTalk-hosted services – which I'll be releasing on CodePlex shortly.
    Listed common options for cache providers and their offerings.

    Discussion
    Cache usage. Different value propositions for utilising the cache: improved performance, isolation from underlying systems (e.g. service output caching can have a TTL large enough to cover downtime), reduced resource impact – CPU, memory, SQL and cost (e.g. caching results of paid-for services).
    Dedicated cache nodes. Preferred over in-host caching provided latency is acceptable. Depending on cache provider, can offer easy scalability and global replication so cache clients always use local nodes. Restriction of AppFabric Caching to Windows Server 2008 not viewed as a concern.
    Security. Limited security model in most cache providers. Options for securing cache content suggested as custom implementations. Obfuscating keys and serialized values may mean additional security is not needed. Depending on security requirements and architecture, can ensure cache servers are only accessible to cache clients via IPsec.
    Staleness. Generally thought to be an overrated problem. Thinking in line with eventual consistency, serving up stale data may not be a significant issue. Good technical arguments support this, although I suspect business users will be harder to persuade.
    Providers. Positive feedback for AppFabric Caching – speed, configurability and richness of the distributed model making it a good enterprise choice. .NET port of memcached well thought of for performance, but lack of replication makes it less suitable for these shared scenarios. Replicated fork – repcached – untried and less active than memcached. NCache also well thought of, but the Express version is too limited for enterprise scenarios, and the commercial versions look costly compared to AppFabric.
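    The WCF behaviour itself isn't included in this post, but the pattern it wraps – look the result up in the cache, call the real service on a miss, store the response with a TTL – is easy to sketch. Below is a minimal, hypothetical illustration of that cache-aside idea in plain Java (the class and method names are mine, and this is not the CodePlex behaviour mentioned above):

    import java.time.Duration;
    import java.time.Instant;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Supplier;

    // Minimal cache-aside sketch: look up by key, fall back to the real service
    // call on a miss or an expired entry, then store the result with a TTL.
    // Hypothetical illustration only - names and structure are not from the session.
    public class TtlCache<K, V> {
        private record Entry<V>(V value, Instant expiresAt) {}

        private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
        private final Duration ttl;

        public TtlCache(Duration ttl) {
            this.ttl = ttl;
        }

        public V getOrLoad(K key, Supplier<V> loader) {
            Entry<V> hit = entries.get(key);
            if (hit != null && Instant.now().isBefore(hit.expiresAt())) {
                return hit.value();                      // fresh hit: skip the backend call
            }
            V loaded = loader.get();                     // miss or stale: call the real service
            entries.put(key, new Entry<>(loaded, Instant.now().plus(ttl)));
            return loaded;
        }

        public static void main(String[] args) {
            // A TTL large enough to cover a short backend outage keeps serving cached output.
            TtlCache<String, String> cache = new TtlCache<>(Duration.ofMinutes(10));
            String first = cache.getOrLoad("customer:42", () -> "expensive service result");
            String second = cache.getOrLoad("customer:42", () -> "never called while fresh");
            System.out.println(first + " / " + second);
        }
    }

    In a dedicated enterprise cache the in-memory map above is replaced by calls to AppFabric, memcached or NCache, but the TTL-driven staleness trade-off discussed in the session stays exactly the same.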

    Read the article

  • Using mixed disks and OpenFiler to create RAID storage

    - by Cylindric
    I need to improve my home storage to add some resilience. I currently have four disks, as follows:

    D0: 500Gb (System, Boot)
    D1: 1Tb
    D2: 500Gb
    D3: 250Gb

    There's a mix of partitions on there, so it's not JBOD, but data is pretty spread out and not redundant. As this is my primary PC and I don't want to give up the entire OS to storage, my plan is to use OpenFiler in a VM to create a virtual SAN. I will also use Windows Software RAID to mirror the OS. Partitions will be created as follows:

    D0 P1: 100Mb: System-Reserved Boot
    D0 P2: 50Gb: Virtual Machine VMDKs for OS
    D0 P3: 350Gb: Data
    D1 P1: 100Mb: System-Reserved Boot
    D1 P2: 50Gb: Virtual Machine VMDKs for OS
    D1 P3: 800Gb: Data
    D2 P1: 450Gb: Data
    D3 P1: 200Gb: Data

    This will result in:

    Mirrored boot partition
    Mirrored operating system
    Mirrored virtual machine O/S disks
    Four partitions for data

    In the four data partitions I will create several large VMDK files, which I will "mount" into OpenFiler as block-storage devices, combined into three RAID arrays (due to the differing disk sizes). In effect, I'll end up with the following usable partitions:

    SYSTEM 100Mb – the small boot partition created by the Windows 7 installer (RAID-1)
    HOST 50Gb – the Windows 7 partition (RAID-1)
    GUESTS 50Gb – Virtual machine guest VMDKs (RAID-1)
    VG1 900Gb – Volume group consisting of a RAID-5 and two RAID-1
    VG2 300Gb – Volume group consisting of a single disk

    On VG1 I can dynamically assign storage for my media, photographs, documents, whatever, and it will be safe. On VG2 I can dynamically assign storage for my data that is not critical and easily recoverable, as it is not safe. Are there any particular 'gotchas' when implementing a virtual OpenFiler like this? Is the recovery process for a failing disk going to be very problematic? Thanks.

    Read the article

  • Merge Records in Session bean by using ADF Drag/Drop

    - by shantala.sankeshwar
    This article describes how to merge multiple selected records in a Session Bean using the ADF drag & drop feature. Below is a simple use case that shows exactly how this can be achieved. Here we have a table and a user input field: the table shows EMP records and the input field accepts a Salary. When we drag & drop multiple records onto the input field, the selected records get updated with the new Salary provided.

    Steps:

    Let us suppose that we have created a Java EE Web Application with Entities from the Emp table. Then create an EJB Session Bean and generate a Data Control for it.

    Write a simple method in the session EJB bean and expose it through the local interface:

    public void updateEmprecords(List empList, Object sal) {
        Emp emp = null;
        for (int i = 0; i < empList.size(); i++) {
            emp = em.find(Emp.class, empList.get(i));
            emp.setSal((BigDecimal) sal);
            em.merge(emp); // merge inside the loop so every selected record is persisted
        }
    }

    Now let us create the updateEmpRecords.jspx page in the ViewController project and drop the empFindAll object as an ADF Table.

    Define a custom SelectionListener method for the table:

    public void selectionListener(SelectionEvent selectionEvent) {
        // Gets the Empno of the selected record and stores it in the list object
        UIXTable table = (UIXTable) selectionEvent.getComponent();
        FacesCtrlHierNodeBinding fcr = (FacesCtrlHierNodeBinding) table.getSelectedRowData();
        Number empNo = (Number) fcr.getAttribute("empno");
        this.getSelectedRowsList().add(empNo);
    }

    Set the table's selectedRowKeys to #{bindings.empFindAll.collectionModel.selectedRow}.

    Drop an inputText on the same jspx page to accept the Salary. Now we would like to drag records from the table above and drop them on the inputText field. This is achieved by adding a dragSource operation inside the table and a dropTarget operation inside the inputText:

    <af:dragSource discriminant="tab"/> <!-- insert this inside the table -->

    <af:inputText label="Enter Salary" id="it13" autoSubmit="true" binding="#{test.deptValue}">
      <af:dropTarget dropListener="#{test.handleTableDrop}">
        <af:dataFlavor flavorClass="org.apache.myfaces.trinidad.model.RowKeySet" discriminant="tab"/>
      </af:dropTarget>
      <af:convertNumber/>
    </af:inputText>

    In the above code, when the user drags and drops multiple records onto the inputText, the dropListener method gets called. Go to the respective page definition file, create an updateEmprecords method action and an Execute action, and write the dropListener method code:

    public DnDAction handleTableDrop(DropEvent dropEvent) {
        // Get the updateEmprecords method binding, pass the parameters and execute it
        DataFlavor<RowKeySet> df = DataFlavor.getDataFlavor(RowKeySet.class);
        RowKeySet droppedKeySet = dropEvent.getTransferable().getData(df);
        if (droppedKeySet != null && droppedKeySet.size() > 0) {
            DCBindingContainer bindings =
                (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
            OperationBinding updateEmp = bindings.getOperationBinding("updateEmprecords");
            updateEmp.getParamsMap().put("sal", this.getDeptValue().getAttributes().get("value"));
            updateEmp.getParamsMap().put("empList", this.getSelectedRowsList());
            updateEmp.execute();
            // Execute the iterator again to refresh the updated records in the table
            OperationBinding executeBinding = bindings.getOperationBinding("Execute");
            executeBinding.execute();
            AdfFacesContext.getCurrentInstance().addPartialTarget(dropEvent.getDragComponent());
            this.getSelectedRowsList().clear();
        }
        return DnDAction.NONE;
    }

    Run the updateEmpRecords.jspx page and enter any Salary, say '5000'. Select multiple records in the table and drop them on the inputText Salary field. Note that the salary of all the selected records gets updated to 5000.

    Technorati Tags: ADF Drag and drop, EJB Session bean, ADF table, inputText, DropEvent

    Read the article

  • SQLBits IV session voting open

    We've now closed session submission for SQLBits IV, which will be taking place on March 28th in Manchester. Once again we've had a great response, and it's now time to vote for which of the 83 submitted sessions you'd like to see. To do this, register on the site, then go to http://www.sqlbits.com/information/PublicSessions.aspx and choose your sessions. Darren and I have both submitted sessions.

    Read the article

  • The PASS Board of Directors Q&A Session

    - by andyleonard
    Friday afternoon (18 Oct 2013), the PASS Board of Directors met with interested members of the SQL Server Community to answer questions. Paraphrases of some questions and notes I collected during the session follow (Please note: this is not a transcript): Elections Kendall Van Dyke asked about duplicate voting. The Board responded that they had looked into the matter and identified duplicate memberships based on names and addresses, but with different email addresses. After filtering for duplicate...(read more)

    Read the article

  • Web application and remote storage of files

    - by Matt
    Hi, I have a web application that can store lots and lots of files on the server, i.e. users upload data to it. The files are stored below a particular storage path. The web host will be an IBM xSeries 345. However, its disks are really expensive, so we would like to put the files onto a less expensive server. Now here is the question: should I use an NFS mount on the IBM server pointing at a path on the storage server, or should I write some scripts to upload the files to the storage server instead? Both the storage server and the web host are on the same network; only the web server is visible to the world. Is NFS performance suitable for an expected low to moderately loaded server?
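    If I go the script route, I'm picturing something roughly like the sketch below – written in Java just to show the shape of it, with a made-up storage.internal host and /uploads/ path standing in for the real storage server:

    import java.io.IOException;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Path;

    // Rough sketch: after a user upload lands under the local storage path,
    // push the file to the internal storage server instead of keeping it locally.
    // The host name and URL layout here are placeholders, not the real setup.
    public class PushToStorage {
        private static final HttpClient CLIENT = HttpClient.newHttpClient();

        static void push(Path localFile) throws IOException, InterruptedException {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://storage.internal/uploads/" + localFile.getFileName()))
                    .PUT(HttpRequest.BodyPublishers.ofFile(localFile))
                    .build();
            HttpResponse<Void> response = CLIENT.send(request, HttpResponse.BodyHandlers.discarding());
            if (response.statusCode() / 100 != 2) {
                throw new IOException("Storage server rejected " + localFile + ": " + response.statusCode());
            }
        }

        public static void main(String[] args) throws Exception {
            push(Path.of(args[0])); // e.g. java PushToStorage /var/www/uploads/photo.jpg
        }
    }

    With NFS none of this is needed, since the files land on the storage server transparently – which is part of why I'm asking whether NFS performance is good enough.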

    Read the article

  • Left mouse button not working in Xubuntu session

    - by giodamelio
    I recently changed from Ubuntu to Xubuntu 12.04. The install worked great for a few days, but suddenly the left mouse button stopped working. The right click and scroll bars work fine. After a bit of experimenting I discovered that the problem only happens when I set the session to Xubuntu at login. The mouse also works fine in my dual-booted Windows Vista. What could make my mouse stop working like that?

    Read the article

  • Oracle OpenWorld Latin America 2012 - Middleware Session

    - by Roberto Monteiro
    Oracle Fusion Middleware PaaS and Oracle Java Cloud Service
    Roberto Monteiro, Senior Sales Consultant, Oracle

    In this session, learn how Oracle Fusion Middleware platform as a service (PaaS) can supercharge productivity with instant access to a platform for developing and deploying business applications in the cloud, complete with integrated security and database access. See how these capabilities are used by Oracle Java Cloud Service.

    Dec 4 - 17:15 - Mezzanine: Room 7

    Read the article

  • TechDays session

    - by barryoreilly
    Here's a link to my TechDays session from last month: http://www.microsoft.com/sverige/techdays/inspelning.aspx?d=http://download.microsoft.com/download/2/6/5/2655127A-A872-4FA3-8E64-28CCA07B618C/53.pdf&m=http://mediadl.microsoft.com/mediadl/www/s/sverige/techdays10/53.wmv

    Read the article

  • "Bad response to Storage command" when scheduling job with Bacula

    - by Joril
    I have a Bacula setup with 9 clients, and it's working happily. Today I had to add another client, so I went and copied+adapted the existing configuration files from another client, but when I schedule a job for the new client, I get these errors:

    20-Mar 17:50 tools-dir JobId 39: Start Backup JobId 39, Job=BackupPresenze2.2012-03-20_17.50.49_04
    20-Mar 17:50 tools-dir JobId 39: Using Device "FileStorage"
    20-Mar 17:50 presenze2-fd JobId 39: Fatal error: Failed to connect to Storage daemon: bacula.mylan.local:9103
    20-Mar 17:50 tools-dir JobId 39: Fatal error: Bad response to Storage command: wanted 2000 OK storage, got 2902 Bad storage

    From the client I can telnet to bacula.mylan.local:9103 just fine, and jobs for other clients work successfully... What could I check? (Server and client run Ubuntu 10.04, if it's relevant.)

    Read the article

  • my application icon won't show in Ubuntu 12.04 session fallback mode

    - by Nelson Teixeira
    I have a Python application that shows a systray icon, but it just won't appear in Ubuntu 12.04 session fallback mode (Gnome Classic WITH effects). It appears in 10.04, and in 12.04 with Unity. The problem is just in Gnome Classic. I've already set:

    gsettings set com.canonical.Unity.Panel systray-whitelist "['all']"

    and I installed indicator-applet-complete, but Alt+Win+right-click won't work. Does anyone have any idea what could be wrong?

    Read the article

  • How can I set up a dual-site Storage Daemon in Bacula (mirror the backup)

    - by Andy
    On Site A, I have successfully set up a Bacula director on one host, several File Daemons on the hosts I want to back up, and finally one Storage Daemon where the backup is actually stored. If disaster struck the building at Site A, I want a second Storage Daemon on another site, Site B. The FileSets, Director etc. would be the same, except that the jobs would be stored on the other Storage Daemon as well. Are there any best practices for this?

    Read the article

  • SQL TuneIn Zagreb 2014 – Session material

    - by Hugo Kornelis
    I spent the last few days in Zagreb, Croatia, at the third edition of the SQL TuneIn conference, and I had a very good time here. Nice company, good sessions, and awesome audiences. I presented my "Understanding Execution Plans" precon to a small but interested audience on Monday. Participants have received a download link for the slide deck. On Tuesday I had a larger crowd for my session on cardinality estimation. The slide deck and demo code used for that presentation will be available through...(read more)

    Read the article

  • Alternatives to using cookies?

    - by theclueless1
    What are the alternatives to using cookies/client-side storage for a PHP/MySQL-based site on Apache?

    Scenario/Requirements:
    I want to try using some anti-bot code to prevent specific scrapers etc. from accessing the site.
    I would like to run this code before loading the rest of the site (before DB access etc.).
    I don't want to constantly run the same code on every page load after a visitor has passed the initial check.
    I'd like to avoid the use of cookies/client-side storage if at all possible.

    The only solution I can currently think of is to write files to the server based on the visitor's IP/UA, or to write a list of them to a single file. Yet this has the limitation of multiple users coming through a proxy/the same connection, etc... So, any ideas/suggestions? Or am I simply overworking the issue?
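    To make the idea concrete, here is a rough sketch of the "check each visitor once, keyed by IP + UA" approach – written in Java purely for illustration rather than the PHP the site actually uses, with an in-memory map standing in for the files or MySQL table that would really hold the verdicts:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of the "check once per visitor" idea: derive a key from IP + User-Agent,
    // run the anti-bot check only the first time that key is seen, and remember the
    // verdict server-side so later page loads skip the check without any cookie.
    // A real deployment would persist verdicts (files, MySQL, memcached) instead of a map,
    // and the shared-proxy caveat from the question still applies.
    public class VisitorGate {
        private final Map<String, Boolean> verdicts = new ConcurrentHashMap<>();

        public boolean isAllowed(String ip, String userAgent) {
            String key = ip + "|" + userAgent;
            return verdicts.computeIfAbsent(key, k -> runAntiBotCheck(ip, userAgent));
        }

        // Placeholder for the real anti-bot logic (rate limits, header sanity checks, etc.).
        private boolean runAntiBotCheck(String ip, String userAgent) {
            return userAgent != null && !userAgent.isBlank();
        }

        public static void main(String[] args) {
            VisitorGate gate = new VisitorGate();
            System.out.println(gate.isAllowed("203.0.113.7", "Mozilla/5.0")); // check runs, allowed
            System.out.println(gate.isAllowed("203.0.113.7", "Mozilla/5.0")); // cached verdict, no re-check
            System.out.println(gate.isAllowed("203.0.113.9", ""));            // fails the placeholder check
        }
    }

    Whether the verdicts live in files or in a table, the key question stays the same: is IP + UA a good enough identity, given proxies and shared connections?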

    Read the article
