Search Results

Search found 8979 results on 360 pages for 'backup sessions'.

Page 48/360

  • More Sessions At Central Coast Code Camp, Ruby/Cloud Computing

      Should Your Application Run In The Cloud: I'm back and sitting in Steve Evans' session, Should Your Application Run In The Cloud. He's now explaining how computers, since the stone age,...

    Read the article

  • Can only connect to file server on second attempt

    - by Ross Fleming
    I have a FreeNAS file server on my local network and I usually connect to it from Windows and Ubuntu computers. Ever since I upgraded from Ubuntu 12.04 to 12.10, Ubuntu will only connect on the second attempt. By which I mean: I browse to it via the file manager, and once I click on the link in "Bookmarks" it complains that it could not connect. If I then try again, it connects successfully and keeps its connection up until the laptop is suspended or loses its connection to the LAN for whatever reason. This isn't much of a problem in itself, as I don't mind having to click twice, but my real problem is that my scheduled backup complains that it cannot connect to the storage device if the share has not already been accessed during the current session. Is there some way to either stop the issue altogether, or to force the backup tool (the default one) to immediately make a second attempt at connecting?
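
    A rough workaround sketch, assuming the backup is launched from a script and the share can be pre-mounted with gvfs-mount; the share URL and backup command below are hypothetical placeholders, not taken from the question:

        # Sketch: try mounting the SMB share (retrying once) before starting the backup.
        # SHARE and BACKUP_CMD are hypothetical placeholders.
        import subprocess
        import time

        SHARE = "smb://freenas.local/backups"     # hypothetical share URL
        BACKUP_CMD = ["deja-dup", "--backup"]     # assumes the default backup tool's CLI

        def warm_up(url, attempts=2, delay=5):
            """Try mounting the share a couple of times; the first attempt may fail."""
            for _ in range(attempts):
                if subprocess.run(["gvfs-mount", url]).returncode == 0:
                    return
                time.sleep(delay)

        if __name__ == "__main__":
            warm_up(SHARE)            # absorb the failing first connection attempt
            subprocess.run(BACKUP_CMD)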

    Read the article

  • How does session middleware generally verify browser sessions?

    - by BBnyc
    I've been using session middleware to build web apps for years: from PHP's built-in session handling layer to Node's Connect session middleware. However, I've never tried (or needed) to roll my own session handling layer. How would one go about it? What sort of checks are necessary to provide at least some modicum of security against HTTP session hijacking? I figure setting a cookie with a token to keep track of the session, and then perhaps some check that the originating IP address of the session doesn't change and that the client browser software remains consistent. I'm hoping to hear about current best practices...
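
    A minimal sketch of the checks described above, assuming a server-side store keyed by a random token; the IP/user-agent fingerprint is the heuristic the question mentions, not a replacement for HTTPS and HttpOnly/Secure cookie flags:

        # Sketch: server-side sessions keyed by an unguessable token, plus the
        # IP / user-agent consistency check mentioned in the question.
        import hashlib
        import secrets

        SESSIONS = {}  # in production this would be a shared store (database, Redis, ...)

        def fingerprint(ip, user_agent):
            """Hash of request attributes expected to stay stable for one browser."""
            return hashlib.sha256(f"{ip}|{user_agent}".encode()).hexdigest()

        def create_session(ip, user_agent):
            token = secrets.token_urlsafe(32)   # random, unguessable session id
            SESSIONS[token] = {"fp": fingerprint(ip, user_agent), "data": {}}
            return token                        # send as an HttpOnly, Secure cookie

        def load_session(token, ip, user_agent):
            session = SESSIONS.get(token)
            if session is None:
                return None
            if session["fp"] != fingerprint(ip, user_agent):
                # Token presented from a different client fingerprint: treat as hijacked.
                del SESSIONS[token]
                return None
            return session["data"]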

    Read the article

  • Hijacking ASP.NET Sessions

    - by Ricardo Peres
    So, you want to be able to access other user’s session state from the session id, right? Well, I don’t know if you should, but you definitely can do that! Here is an extension method for that purpose. It uses a bit of reflection, which means it may not work with future versions of .NET (I tested it with .NET 4.0/4.5).

        public static class HttpApplicationExtensions
        {
            private static readonly FieldInfo storeField = typeof(SessionStateModule).GetField("_store", BindingFlags.NonPublic | BindingFlags.Instance);

            public static ISessionStateItemCollection GetSessionById(this HttpApplication app, String sessionId)
            {
                var module = app.Modules["Session"] as SessionStateModule;

                if (module == null)
                {
                    return (null);
                }

                var provider = storeField.GetValue(module) as SessionStateStoreProviderBase;

                if (provider == null)
                {
                    return (null);
                }

                Boolean locked;
                TimeSpan lockAge;
                Object lockId;
                SessionStateActions actions;

                var data = provider.GetItem(HttpContext.Current, sessionId.Trim(), out locked, out lockAge, out lockId, out actions);

                if (data == null)
                {
                    return (null);
                }

                return (data.Items);
            }
        }

    As you can see, it extends the HttpApplication class; that is because we need to access the modules collection, for the Session module. Use with care!

    Read the article

  • Coordinating team code review sessions [closed]

    - by Wade Tandy
    My question has two parts: 1) In your team or organization, do you ever do in-person code reviews with all or part of a team, as opposed to online reviews using some sort of tool? 2) How do you structure these meetings? Do you choose to focus on one person's code in a given meeting? Do you look at everything? Take a random sample? Ask people on the team which of their code they'd like to have looked at? I'd love to add this practice to my development team, so I'd like to hear how others are doing it.

    Read the article

  • Alternative Windows Offline Files + Windows Backup + Previous Version Setup

    - by Herson
    Currently our documents are all hosted on a Windows 7 box. Users access the files over a Windows share, and the documents are available offline (a Windows 7 feature). The documents are backed up daily by the Windows 7 Backup and Restore utility, and users can access previous versions of a file (from the backups) using Windows Explorer's "previous versions" feature. This setup is currently working well, except for the following: We would prefer to have access to hourly versions of a file, not daily. The previous-versions mechanism is tied to the backup mechanism: Windows 7 performs a full backup every week and an incremental backup every day, and the previous versions of a file are simply whatever is available in the backups. If you have 20 GB of documents and want to maintain at least three (3) years of history, you will use at minimum 3 years * 52 weeks * 20 GB, or about 3 TB, even if there are few changes to the documents. That is a pretty inefficient use of space. Looking up previous versions of a file is also very slow (tens of minutes), which is probably related to the previous issue: Windows has to traverse all of its backups. I am considering using SVN plus TortoiseSVN with autocommit/autoupdate. It would have the following advantages: Backups are easy and also cover the whole history of each document (just back up the repository). Previous versions can be created frequently; I think the svn commit/update cycle can be run every two minutes or so. Users can sync over the net. However, I can see the following issues: More conflicts than with the original setup, because multiple users can now edit the same file even when both are online, i.e. can connect to the SVN repo. The users can of course lock a file before editing, but that would mean they have to adjust. Delay in propagation of file changes: with Windows 7 file sharing, changes made by one online user are instantly visible to other online users, whereas with the SVN setup changes are only propagated when the users run the svn add/commit/update sequence, so the delay will probably be a few minutes, and this workflow will no longer work: "Hi, I just edited document X, can you have a quick look?" I would like to ask the opinion of the community on alternative setups, or improvements to the above setups, to work out the kinks.
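
    A rough sketch of the autocommit sequence mentioned above, meant to be run by a scheduler every couple of minutes; the working-copy path and commit message are hypothetical placeholders:

        # Sketch: the svn add/commit/update sequence, run on a schedule.
        import subprocess

        WORKING_COPY = r"C:\Documents"   # hypothetical working-copy path

        def svn(*args):
            subprocess.run(["svn", *args], cwd=WORKING_COPY, check=True)

        def autocommit():
            svn("add", "--force", ".")                  # pick up any new files
            svn("commit", "-m", "automatic commit")     # push local edits
            svn("update")                               # pull other users' edits

        if __name__ == "__main__":
            autocommit()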

    Read the article

  • Why are Back In Time snapshots so large?

    - by Chethan S.
    I just backed up the contents of my home partition onto my external hard drive using Back In Time. I browsed to the backed-up contents on the external drive, and under Properties it showed the size as 9.6 GB. I had read that for subsequent snapshots Back In Time does not back everything up again, but instead creates hard links to older content and saves only the newer content, so I wanted to test it. I copied two small files into my home partition and ran 'Take Snapshot' again. The operation completed within a minute: first it checked the previous snapshot, assessed the changes, detected the two new files and synced them. After this, when I browsed to the backed-up contents, I was surprised to see the newer and the older backup taking up 9.6 GB each. Isn't this a waste of hard drive space? Or did I interpret something wrongly?
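
    One way to check what is actually happening is to compare inodes between the two snapshots: hard-linked files share an inode, so they are stored only once even though each snapshot folder reports the full size. A small sketch, with hypothetical snapshot paths:

        # Sketch: check whether the same file in two snapshots is one hard-linked copy.
        # The snapshot paths are hypothetical placeholders.
        import os

        old = "/media/external/backintime/snapshot-old/backup/home/user/somefile"
        new = "/media/external/backintime/snapshot-new/backup/home/user/somefile"

        s_old, s_new = os.stat(old), os.stat(new)

        if s_old.st_ino == s_new.st_ino and s_old.st_dev == s_new.st_dev:
            print("Same inode: both snapshots point at a single copy on disk,")
            print("even though each snapshot's folder reports the full file size.")
        print("hard link count:", s_new.st_nlink)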

    Read the article

  • Domain controller failed to restore using windows backup tools

    - by Peilin
    One 2008 R2 domain controller with full daily backups (the only controller in this company) is out of service due to a hardware issue. I tried the two methods below to recover onto the newly purchased server, but both fail. 1) First method: boot from the Windows 2008 R2 CD and carry out a recovery from backup. Everything is OK, but after rebooting it comes up with a blue screen and restarts again. 2) Second method: a) Install the OS on the new server. b) Reboot the server into DSRM. c) Use the Windows backup tools to restore the system state only. After rebooting, it comes up with the blue screen error and restarts again. I know this may be caused by the different hardware, but how can I resolve it? Or can we restore only the AD services rather than the whole system state? Any suggestions?

    Read the article

  • How to exclude directories from Mozy custom backup sets using spotlight queries

    - by bromfiets
    I would like to create custom backup sets for Mozy which exclude certain directories. For example, I would like to back up my iTunes folder but exclude all podcasts. I have created a backup set which searches in /Users/me/Music and used the query kMDItemPath == "*Podcasts*"wc to exclude all matching files. However, nothing matches. Queries which use the kMDItemFSName Spotlight attribute work fine, but any query using kMDItemPath doesn't seem to work at all. What am I doing wrong?
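
    One way to narrow this down is to run the same query outside Mozy with mdfind and see whether Spotlight itself returns any matches for kMDItemPath; a small sketch, reusing the path and query from the question:

        # Sketch: test the Spotlight query directly with mdfind.
        import subprocess

        query = 'kMDItemPath == "*Podcasts*"wc'
        result = subprocess.run(
            ["mdfind", "-onlyin", "/Users/me/Music", query],
            capture_output=True, text=True,
        )
        matches = [line for line in result.stdout.splitlines() if line]
        print(f"{len(matches)} files match the query")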

    Read the article

  • Why do Hyper-V and Windows Backup crash (BSOD) after a successful backup?

    - by Payson Welch
    Hello, I am running Server 2008 R2 with a handful of Hyper-V guest nodes. If Windows Backup runs without any of the Hyper-V nodes running, the server is fine. If the backup runs while the Hyper-V nodes are running, it is fine until a few minutes after the backup completes, and then it BSODs. The storage location for the backup is iSCSI. I am wondering if anyone has any input on what might be causing this. I don't have the Hyper-V nodes set up on a VLAN, and there is only one NIC in the server. Is it possible this is a networking/driver issue, and if so, how would I reconfigure the networking to fix it?

    Read the article

  • PHP on several servers with session-sharing

    - by Etu
    There are certainly other threads about this, but I have one more question. We are about to scale the website at work to more than one server, and we need to share sessions between the servers. We have been looking into different solutions; one is memcached, using memcached as the session handler in PHP. That will probably work, and the idea would be to run memcached on every machine and let every web server access every other server's memcached instance, so that we have shared sessions between the machines, yay. (We have no resources to set up sticky sessions yet; that's a later project. We need this running, and we need it running now, and we will load-balance with DNS for a start.) But then... if I want to take one server down, say for maintenance, or a server crashes, or whatever, I don't want the users to just lose their sessions and have to start from the beginning. That's why we need some kind of replication, which memcached does not support. Then I found repcached (http://repcached.lab.klab.org/), which offers multi-master replication of memcached, which is great and is what I want. But does it work with 2 machines? Say 3, 5, 10, for future scaling? I also looked into Redis (http://redis.io/), which also seems great, but it is a bit more "shaky" with the PHP session handler support, and it has no multi-master replication. The thing is that I would like to use memcached, but I want to be able to power down one of two boxes without losing half of the sessions. Any suggestions?
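
    The underlying idea, independent of PHP, is to write every session to two nodes and read from whichever one answers, so a single box can go down without losing sessions. A concept sketch only (in Python with pymemcache, purely for illustration; the node addresses are hypothetical):

        # Concept sketch: dual-write sessions to two memcached nodes for redundancy.
        from pymemcache.client.base import Client

        NODES = [Client(("web1.internal", 11211)), Client(("web2.internal", 11211))]

        def save_session(session_id, payload, ttl=1440):
            for node in NODES:
                try:
                    node.set("sess:" + session_id, payload, expire=ttl)
                except Exception:
                    pass  # node down: the copy on the other node still exists

        def load_session(session_id):
            for node in NODES:
                try:
                    value = node.get("sess:" + session_id)
                    if value is not None:
                        return value
                except Exception:
                    continue
            return None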

    Read the article

  • Extracting httpdocs from Plesk Panel 9.5.4 Webserver backup file

    - by Paddington
    Good day, I am having problems manually extracting domains from a Plesk 9.5 backup that was FTPed onto my backup server. I have followed the article http://kb.parallels.com/en/1757 using method 2. The problem is with this step: zcat DUMP_FILE.gz > DUMP_FILE. My backup file CP_1204131759.tar is a tar archive, so zcat does not work with it. So I proceed to run the command cat CP_1204131759.tar > CP_1204131759, but when I try # cat CP_1204131759 | munpack I get an error that munpack did not find anything to read from standard input. I then extracted the tar backup file using the xvf flags and got a lot of files (about 20) similar to these: CP_sapp-distrib.7686-0_1204131759.tgz CP_sapp-distrib.7686-35_1204131759.tgz CP_sapp-distrib.7686-6_1204131759.tgz How best can I extract the httpdocs of a domain from this server-wide Plesk 9.5.4 backup?

    Read the article

  • BUILD 2013 Sessions – Building Great Windows Phone UI in XAML

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/build-2013-sessionsndashbuilding-great-windows-phone-ui-in-xaml.aspx Giving even the simplest of smartphone apps a compelling UI can be a challenge, regardless of the platform, and Windows Phone and XAML are no exception. That is what got my interest in this session by Shawn Oster. He took a checklist-type approach to the subject, which is good considering that is about the only way many of us get things done. Shawn started out giving us a set of bad design/good design examples. They very effectively showed how good design gives a sense of professionalism to your app that could determine whether your wonderful idea actually makes money or is DOA. I won't go over all his points since you will be able to get the session online, but a few of his checklist items included designing from the beginning instead of as an afterthought, not being afraid to leave white space, and making sure your application elegantly supports both landscape and portrait modes. The many gems make this a must-watch for any developer who struggles with visual design. del.icio.us Tags: BUILD 2013, Windows Phone, XAML, Design

    Read the article

  • Issue with Windows Server backup

    - by mamu
    I have Windows Server 2008 R2 installed; the only role running on it is Hyper-V. I am trying to take a backup using the Windows Server Backup feature and it fails with the following error in the event log: The backup operation that started at '2009-08-22T18:42:14.123000000Z' has failed because the Volume Shadow Copy Service operation to create a shadow copy of the volumes being backed up failed with following error code '2155348129'. Please review the event details for a solution, and then rerun the backup operation once the issue is resolved. The error itself points to other event logs for more detail, but I can't find anything in them. I then ran vssadmin list writers, and it had the following out-of-the-ordinary entry in the list: Writer name: 'Microsoft Hyper-V VSS Writer' Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de} Writer Instance Id: {d15c5f78-121c-464f-b23b-f285e919b05c} State: [8] Failed Last error: Inconsistent shadow copy How can I resolve this?

    Read the article

  • Big Data Sessions at Openworld 2012

    - by Jean-Pierre Dijcks
    If you are coming to San Francisco and are interested in all aspects of big data, this Focus On Big Data is a must-have document. Some (other) highlights: A performance demo of a full-rack Big Data Appliance in the engineered systems showcase. A set of hands-on labs on how to go from a NoSQL DB to an effective analytics play on big data. Much, much more. See you all in a few weeks in SF!

    Read the article

  • Moving from a traditional in memory Java session to persistent storage sessions

    - by Benju
    We have decided to take the plunge and move from using a typical Java session provider in Tomcat/Jetty/etc. to persisting everything to a central datastore. We are looking at using MongoDB for this. A few options come to mind... http://wiki.eclipse.org/Jetty/Tutorial/MongoDB_Session_Clustering This is nice because it will "auto-magically" persist our sessions to a Mongo installation. I am concerned, however, that we will not have fine-grained control over what is happening. https://github.com/mattinsler/com.lowereast.guiceymongo/ GuiceyMongo is interesting as it integrates with Guice; perhaps we could persist everything via this ORM. Has anybody had to deal with this kind of move? It seems that moving from in-memory to persistent session storage has a lot of gotchas.

    Read the article

  • best cloud storage + rsnapshot

    - by humbledude
    I've started using rsnapshot as my backup system for my home PC. I really like the idea of hard links and how they are handled, but I can't find the best workflow. Currently I keep my snapshots on the same partition and, let's say, copy the newest one to a pen drive at the end of the week. Cloud storage is what I'm looking for. As far as rsnapshot is concerned, Dropbox doesn't fit my needs; moreover, there is no way to make it respect hard links, so every snapshot is treated as a full snapshot. Renting a server is pretty expensive, so my question is: are there better alternatives for backup in the cloud? I would like to benefit from hard links and send only incremental backups, just like on my local host.
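
    For context, the space saving rsnapshot relies on comes from hard-linking unchanged files to the previous snapshot, which is exactly what a plain file-sync service cannot preserve. A minimal sketch of that mechanism, with hypothetical paths:

        # Sketch: rsnapshot-style snapshot where unchanged files are hard links
        # to the previous snapshot and only changed files are copied.
        # The paths are hypothetical placeholders.
        import filecmp
        import os
        import shutil

        SOURCE = "/home/user/data"
        PREVIOUS = "/backups/snapshot-1"
        CURRENT = "/backups/snapshot-2"

        for root, dirs, files in os.walk(SOURCE):
            rel = os.path.relpath(root, SOURCE)
            os.makedirs(os.path.join(CURRENT, rel), exist_ok=True)
            for name in files:
                src = os.path.join(root, name)
                prev = os.path.join(PREVIOUS, rel, name)
                dest = os.path.join(CURRENT, rel, name)
                if os.path.isfile(prev) and filecmp.cmp(src, prev, shallow=False):
                    os.link(prev, dest)      # unchanged: share the existing copy
                else:
                    shutil.copy2(src, dest)  # new or changed: store a fresh copy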

    Read the article

  • Error while taking Transaction log backup

    - by Divya Kapoor
    Hello, I have scheduled a transaction log backup, but the backup is not happening. The error in the logs is this: Transaction Log Backup.Subplan_1,Error,0,ARCOTDB1\ARCOT_DB_INST1,Transaction Log Backup.Subplan_1,(Job outcome),,The job failed. Unable to determine if the owner (ARCOT-DB1\Superuser) of job Transaction Log Backup.Subplan_1 has server access (reason: Could not obtain information about Windows NT group/user 'ARCOT-DB1\Superuser'<c/> error code 0x534. [SQLSTATE 42000] (Error 15404)) Please help!

    Read the article

  • Ubuntu 14.04 Fatal Exception

    - by user286534
    I use Ubuntu 14.04, 64-bit. I installed VirtualBox and was testing another Linux OS (Deepin). My system froze and I could not get to a TTY session to reboot, so I had to do a hard restart, and when Ubuntu restarted I got various error messages, one of which was "kernel panic - fatal exception in interrupt". Booting into advanced mode and attempting repairs did not work (fsck, GRUB repair, etc.), so I reinstalled Ubuntu and chose the option to keep my files intact. I can now access my system, but many programs I had installed do not work. My question is: I have a Déjà Dup backup (but only of my home directory); is it better to restore my backup files, or do I have to reinstall all of my programs? The weird thing is that the programs I checked in the Software Center show as installed, but they won't appear in the Dash.

    Read the article

  • Want to back up using dd, but my present Ubuntu installation is 149.04 + 3.81 (swap) GB and my target drive is only 149.05 GB

    - by Shreshth
    My netbook is a Windows 7 / Ubuntu 12.04 dual boot. In GParted the structure looks like this:

    Partition      Filesystem   Size
    /dev/sda2      extended     152.86 GiB
      /dev/sda6    ext4         149.04 GiB
      /dev/sda5    linux-swap   3.81 GiB
    /dev/sda3      ntfs         100 MiB
    /dev/sda4      ntfs         145.13 GiB
    /dev/sdb1      fat32        149.05 GiB

    I want to back up my Ubuntu 12.04 installation, that is sda2 (sda6 + sda5), to sdb1. As you can see, sda5 + sda6 is 152.86 GiB, whereas sdb1 is only 149.05 GiB. Can I back up only sda6 (149.04 GiB) without losing any data? That is to say, will I be able to restore my Ubuntu using only sda6 and later add the needed swap? Edit: Made it readable.

    Read the article

  • SQL Server 2008 Restore from Backup fails with error 3241 'cannot process this media family'

    - by pearcewg
    I am attempting to back up a database from a SQL Server instance on one machine and restore it to another, and I am encountering the frequently discussed 'SQL Server cannot process this media family' error. Both instances are SQL Server 2008, but at different patch levels: Restore: 10.0.2531.0; Backup: 10.0.1600.22 ((SQL_PreRelease).080709-1414). The restore instance is Express; I'm not sure about the backup instance's edition. The backup instance is on a virtual private server and the restore instance is on my development box. When I restore to a different database on the source (backup) server, it restores fine. There is lots of material on Google about this issue, and some on Stack Overflow, but nothing that covers this exact situation. Any thoughts? It should be straightforward to do a backup and restore from one machine to another (having done this thousands of times with SQL 6.5, 7, 2000 and 2005).

    Read the article

  • System State Backups using NTbackup fail with error 0x800423f4 (relating to volume shadow copy)

    - by Paul Zimmerman
    We have a Windows Server 2003 R2 machine running Service Pack 2. It is a domain controller (Global Catalog) and our main internal DNS server. We run a System State backup of the machine to back up Active Directory information and save the backup to a different server. This server has a single drive (C:), and we do have Shadow Copies enabled for the volume (and they are completing successfully). The System State backup is now failing with the following listed in the backup logs: Volume shadow copy creation: Attempt 1. "Event Log Writer" has reported an error 0x800423f4. This is part of System State. The backup cannot continue. Error returned while creating the volume shadow copy: 800423f4 Aborting Backup. The operation did not successfully complete. When running vssadmin list writers, we sometimes get the following reported for the Event Log Writer (at other times it reports the state "[1] Stable" with "No error"): Writer name: 'Event Log Writer' Writer Id: {eee8c692-67ed-4250-8d86-390603070d00} Writer Instance Id: {c7194e96-868a-49e5-ba99-89b61977753c} State: [8] Failed Last error: Retryable error We have tried disabling the Event Log service via the registry, rebooting, deleting the event log files from the drive, then re-enabling the service via the registry and rebooting, but this didn't solve the issue. We also get an error in Event Viewer when trying to open the log for the File Replication Service: "Unable to complete the operation on 'File Replication Service'. The security descriptor structure is invalid." I have searched the error via Google and tried a number of different things, but nothing has helped. Any suggestions on what we might try to get the Event Log Writer to behave would be greatly appreciated!

    Read the article

  • Ideal Bacula appliance?

    - by Ricket
    I'm an intern at a small company and we (the IT department of two) manage fewer than 100 client computers and a handful of servers. Currently we're using a vendor's appliance to handle backup; it does a small backup every night and a full backup every weekend, and a guy comes on Wednesday to take an offsite backup drive (and gives back last week's drive to swap with it). Lately this system, mainly the appliance, has been having problems, so we are looking for an alternative. I'm researching other companies but also looking into what we might expect from trying to do this ourselves. There will undoubtedly be a large learning curve, but hey, that's what Server Fault is for, right? :) So anyway, I was looking at Bacula. The feature list sounds great and documentation is plentiful, but it's only software. So my question is: what is the ideal backup server to run the Bacula server software on? And not only the server, but other related appliances. Our current backup appliance uses only hard drives, not tape drives; it has several plugged in at one time, in hot-swap bays on the front of the machine. I couldn't help but notice, though, that it's hardly more than Windows XP with hard drive bays, a PCI eSATA card (which connects to another appliance extension piece with two more bays), and their software. Since the company will take back their appliance if/when we cancel with them, where can I go to configure a server with these kinds of things? Maybe I'm being naive; I'm sure Dell (and any other computer company) sells them in the small-business section of their website, but I wanted to make sure that there's not some other more recommended place that other companies are getting their hardware from, and that I don't need anything special for Bacula.

    Read the article

  • Can I change the file system on the OS partition on Server 2008 R2?

    - by KCotreau
    I have a client using R1Soft Continuous Data Protection backup, and two of their Server 2008 R2 boxes were erroring out with these errors: Unable to obtain NTFS volume data for device '\\?\Volume{f612849e-7125-11e0-8772-806e6f6e6963}': Incorrect function. Unable to discover information for filesytem volume '\\?\Volume{f612849e-7125-11e0-8772-806e6f6e6963}'; Unable to obtain NTFS volume So I backed up all the registry entries containing {f612849e-7125-11e0-8772-806e6f6e6963} and deleted them, based on some VERY sparse info from R1Soft. I then decided to restore them before I rebooted, and to do a system state backup first using MS Backup, and even it errored out, saying that there were FAT32 partitions. This was a major clue, as the only two computers with problems had these FAT32 partitions. I figured that if MS Backup can't back something up, any other program is likely to have problems too. Also, now that I realize the servers have FAT32 partitions on them, the error referencing NTFS takes on more weight. The partitions on both servers have the label "OS", but on one of the computers it is assigned a drive letter and on the other it is not. So I am thinking that if I just convert the file systems from FAT32 to NTFS, it may solve the backup problem. So the question is this: can I just convert those partitions, and does anyone have any concrete knowledge of major downsides, like the servers not coming back up (of course, I would do one at a time)? My thinking is that the answer is probably at least 95% no, but they are production servers, so I wanted to get some second opinions.

    Read the article

  • SQL Server 2008 Logshipping not Restoring

    - by Nai
    I am getting the following errors during the restore part of the log shipping process on my secondary server: 2010-04-01 10:00:01.85 Error: The file 'F:\UK_20100327090001.trn' is too recent to apply to the secondary database 'UK_Backup'.(Microsoft.SqlServer.Management.LogShipping) 2010-04-01 10:00:01.85 Error: The log in this backup set begins at LSN 55408000007387500001, which is too recent to apply to the database. An earlier log backup that includes LSN 55147000001788900001 can be restored. RESTORE LOG is terminating abnormally.(.Net SqlClient Data Provider) 2010-04-01 10:00:01.87 Searching for an older log backup file. Secondary Database: 'UK_Backup' 2010-04-01 10:00:01.90 Skipped log backup file. Secondary DB: 'UK_Backup', File: 'F:\UK_20100324090000.trn' 2010-04-01 10:00:01.93 Error: Could not find a log backup file that could be applied to secondary database 'UK_Backup'.(Microsoft.SqlServer.Management.LogShipping) 2010-04-01 10:00:01.93 Deleting old log backup files. Primary Database: 'UK' 2010-04-01 10:00:01.96 The restore operation completed with errors. Secondary ID: 'c066bb63-930c-4b73-861c-f59f0a38c12c' It was happily humming along until I checked it this morning. Some additional details: in the log shipping folder there is one file, UK_20100324090001.trn, dated 2010-03-24. The next most recent .trn file is UK_20100327090001.trn, which is the file the restore job tried to apply. Why is there an older .trn file seemingly on its own? How can I fix this problem? It would be a real pain to restart the entire log shipping process. x_x

    Read the article
