Search Results

  • Clone a Red Hat RAID as part of a disaster recovery plan

    - by Campo
    I am looking for recommendations to clone a Red Hat mirrored RAID to a single hard drive located in the same machine. The idea is that if the server's hardware ever has an issue, we have a similar hardware machine ready to go: all we would have to do is pop in the cloned drive. If the server's RAID ever failed, we could just switch to the single drive to maintain uptime, and restore the original configuration on the spare server from a backup. This is a restaurant and they are open 7 days a week. We do have time from 12:00am to 9:00am to perform the necessary steps for a clone, and we're talking about under 10 gigs of information. There is a database on the server. I have looked into rsync and Clonezilla, but I am just not confident either is capable of completing the task I want. Looking for some suggestions and possibly a step-by-step if you could be so kind.
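
    A minimal sketch of a nightly clone job, assuming the spare drive is already partitioned, formatted and mounted at /mnt/clone (device names, paths and the MySQL dump are illustrative assumptions; the spare still needs a boot loader installed on it once, which is the step a tool like Clonezilla automates):

        import subprocess

        # Dump the database first so the copy is consistent (assumed MySQL here).
        subprocess.run(["mysqldump", "--all-databases", "-r", "/root/db-backup.sql"],
                       check=True)

        # Mirror the live filesystem onto the spare drive, skipping pseudo-filesystems.
        subprocess.run([
            "rsync", "-aAXH", "--delete",
            "--exclude=/proc/*", "--exclude=/sys/*", "--exclude=/dev/*",
            "--exclude=/tmp/*", "--exclude=/mnt/*",
            "/", "/mnt/clone/",
        ], check=True)

    At under 10 gigs, a run like this fits easily inside the 12:00am to 9:00am window.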

  • Loading Entities Dynamically with Entity Framework

    - by Ricardo Peres
    Sometimes we may be faced with the need to load entities dynamically, that is, knowing their Type and the value(s) of the property(ies) that make up the primary key. One way to achieve this is with the following extension methods for ObjectContext (which can be obtained from a DbContext, of course):

        public static class ObjectContextExtensions
        {
            public static Object Load(this ObjectContext ctx, Type type, params Object[] ids)
            {
                Object p = null;

                EntityType ospaceType = ctx.MetadataWorkspace.GetItems<EntityType>(DataSpace.OSpace)
                    .SingleOrDefault(x => x.FullName == type.FullName);

                List<String> idProperties = ospaceType.KeyMembers.Select(k => k.Name).ToList();

                List<EntityKeyMember> members = new List<EntityKeyMember>();

                EntitySetBase collection = ctx.MetadataWorkspace
                    .GetEntityContainer(ctx.DefaultContainerName, DataSpace.CSpace)
                    .BaseEntitySets
                    .Where(x => x.ElementType.FullName == type.FullName)
                    .Single();

                for (Int32 i = 0; i < ids.Length; ++i)
                {
                    members.Add(new EntityKeyMember(idProperties[i], ids[i]));
                }

                EntityKey key = new EntityKey(String.Concat(ctx.DefaultContainerName, ".", collection.Name), members);

                ctx.TryGetObjectByKey(key, out p);

                return (p);
            }

            public static T Load<T>(this ObjectContext ctx, params Object[] ids)
            {
                return ((T)Load(ctx, typeof(T), ids));
            }
        }

    This will work with single-property as well as composite primary keys, but you will have to supply each of the corresponding values in the order in which the key properties are defined. Hope you find this useful!

  • Setting up a Google Analytics Campaign

    - by Ashfame
    I will be doing a bunch of things to give one of my projects (the main app) a big initial push, for which I will be building a few small Facebook apps that will help promote the main app. Traffic from these apps needs to be tracked individually. My main app will be posting on users' walls when they need to be notified; traffic from these posts needs to be tracked. Traffic from emails sent by the main app needs to be tracked too, by type of email. I need to track all of these & possibly a couple more, but I need to be sure I build my campaign URLs correctly, as I won't get another chance to fix them. Correct me where I am wrong. For emails:

        Campaign Name: Launch
        Campaign Medium: Email
        Campaign Source: Type1 or Type2 (I can break it down for different types of email, right?)

    For apps:

        Campaign Name: Launch
        Campaign Medium: Apps
        Campaign Source: App1 or App2 (I can break it down here for different apps, right?)

    What if I want to track two different links within a single email or a single app? Is there any way of tracking them individually while still tracking them as one, since tracking them as one makes more sense for me? Are Campaign Term & Campaign Content irrelevant in my case, or can/should I use them for something? And I will also be tracking traffic of different apps. Should I do more? Let me know if my scenario wasn't clear enough & I need to explain more.
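
    A minimal sketch of how such URLs might be assembled (names and values are the illustrative ones from the question); note that utm_content exists precisely to distinguish two links that share the same source/medium/campaign:

        from urllib.parse import urlencode

        def campaign_url(base, source, medium, campaign, content=None):
            # Standard Google Analytics campaign parameters.
            params = {"utm_source": source, "utm_medium": medium, "utm_campaign": campaign}
            if content:
                # Distinguishes individual links within one source/medium/campaign.
                params["utm_content"] = content
            return base + "?" + urlencode(params)

        # Two links in the same Type1 email, still grouped under one campaign:
        print(campaign_url("http://example.com/", "type1", "email", "launch", "header-link"))
        print(campaign_url("http://example.com/", "type1", "email", "launch", "footer-link"))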

  • MSSQL: Choice of service accounts

    - by Troels Arvin
    When installing MS SQL Server 2008, one needs to associate a service account with the installation (possibly even several accounts: one for the SQL Server Agent, one for Analysis Services, and so on, but let's leave that aside for simplicity). The service account may be a local account or a Windows domain account. If a domain account is used: can MSSQL start if connectivity to the domain controllers is temporarily down? If the answer is yes: should each DBMS instance on each server have a separate account, or does it make sense to use a particular "MSSQL" domain account for all MSSQL installations in the organization? If separate accounts are used for each instance on each server: does it make sense to create a special MSSQL security group in the domain and place all the MSSQL service accounts in that group, perhaps to ease replication, etc.? Is there a common, generally accepted naming convention for MSSQL service account(s)?

  • How do you keep up with Nagios/Capistrano configs when using EC2?

    - by imaginative
    I use Amazon EC2 for my mobile app. Depending on the load of the application at a given time, I might spawn new instances and then take them down when load is lower, to save costs. How does one keep up with Nagios configurations for such a dynamic environment? When one deals with managed hardware, configuration files are predictable. In this case Nagios, Capistrano and a bunch of other configuration files would need entries added. Capistrano needs to know where to deploy a new build for an app server. Nagios needs to know to remove an existing instance or add a new instance for monitoring. Nagios also needs to know if a node was intentionally taken down or if the host is down due to error. How is this done in the wonderful world of VPS/dynamic instances?
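
    A minimal sketch of one common approach: regenerate the Nagios host list from the cloud provider's API rather than maintaining it by hand. This assumes the modern boto3 SDK, standard AWS credentials, and a stock generic-host template; run it from cron or from launch/terminate hooks, then reload Nagios:

        import boto3

        # Ask EC2 for everything currently running; terminated instances
        # simply drop out of the generated file on the next run.
        ec2 = boto3.client("ec2")
        reservations = ec2.describe_instances(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        )["Reservations"]

        with open("/etc/nagios/conf.d/ec2_hosts.cfg", "w") as cfg:
            for r in reservations:
                for inst in r["Instances"]:
                    cfg.write("define host {\n")
                    cfg.write("    use        generic-host\n")
                    cfg.write(f"    host_name  {inst['InstanceId']}\n")
                    cfg.write(f"    address    {inst.get('PublicDnsName', '')}\n")
                    cfg.write("}\n\n")

    Because the file is rebuilt from the source of truth each time, an instance taken down on purpose disappears from monitoring instead of showing up as a failed host.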

  • LDAP encrypt attribute that extends userpassword

    - by Foezjie
    In my current LDAP schema I have an objectclass (let's call it group) that has 2 attributes that extend userPassword, like this:

        attributeType ( groupAttributes:12 NAME 'groupPassword1' SUP userPassword SINGLE-VALUE )
        attributeType ( groupAttributes:13 NAME 'groupPassword2' SUP userPassword SINGLE-VALUE )

    group extends organisation, so it already has a userPassword attribute. If I use that to enter a new group using PHPLDAPAdmin, it uses SSHA (by default) and hashes the password I entered. But the passwords I entered for groupPassword1 and groupPassword2 don't get hashed. Is there a way to make those attributes get hashed too?
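
    One workaround, sketched below: PHPLDAPAdmin's automatic hashing typically applies only to the stock userPassword attribute, so values for derived attributes can be hashed client-side and submitted already encoded. This produces a standard {SSHA} value (salted SHA-1, the same scheme OpenLDAP uses):

        import base64
        import hashlib
        import os

        def ssha(password: str) -> str:
            # {SSHA} = base64(sha1(password + salt) + salt)
            salt = os.urandom(4)
            digest = hashlib.sha1(password.encode("utf-8") + salt).digest()
            return "{SSHA}" + base64.b64encode(digest + salt).decode("ascii")

        # Paste the result into the groupPassword1/groupPassword2 fields.
        print(ssha("secret"))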

  • How to implement behavior in a component-based game architecture?

    - by ghostonline
    I am starting to implement player and enemy AI in a game, but I am confused about how best to implement this in a component-based game architecture. Say I have the following player character that can be stationary, running and swinging a sword. A player can transition to the swing-sword state from both the stationary and running states, but then the swing must be completed before the player can resume standing or running around. During the swing, the player cannot walk around. As I see it, I have two implementation approaches: Create a single AI-component containing all player logic (either decoupled from the actual component or embedded as a PlayerAIComponent). I can easily see how to enforce the state restrictions without creating coupling between the individual components making up the player entity. However, the AI-component cannot be broken up. If I have, for example, an enemy that can only stand and walk around, or one that only walks around and occasionally swings a sword, I have to create new AI-components. Break the behavior up into components, each identifying a specific state. I then get a StandComponent, WalkComponent and SwingComponent. To enforce the transition rules, I have to couple each component. SwingComponent must disable StandComponent and WalkComponent for the duration of the swing. When I have an enemy that only stands around, swinging a sword occasionally, I have to make sure SwingComponent only disables WalkComponent if it is present. Although this allows for better mixing and matching of components, it can lead to a maintainability nightmare, as each time a dependency is added, the existing components must be updated to play nicely with the new requirements the dependency places on the character. The ideal situation would be that a designer can build new enemies/players by dragging components into a container, without having to touch a single line of engine or script code. Although I am not sure script coding can be avoided, I want to keep it as simple as possible. Summing it all up: should I lob all AI logic into one component, or break each logic state into separate components to create entity variants more easily?
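
    A minimal sketch of a middle road (names are illustrative): keep one state-component per behavior, but move the transition rules into a data table owned by the entity, so components never need to know about each other and designers can vary the table per enemy:

        class Entity:
            def __init__(self, components, transitions, initial="stand"):
                # components: mapping of state name -> component instance
                # transitions: mapping of state name -> set of states it may enter
                self.components = components
                self.transitions = transitions
                self.state = initial

            def request(self, state):
                # Ignore requests the current state does not allow (e.g. mid-swing).
                if state in self.components and state in self.transitions[self.state]:
                    self.state = state

            def finish(self, fallback="stand"):
                # Called by a component such as SwingComponent when its action ends.
                self.state = fallback

        # A full player: swinging locks out everything until finish() is called.
        player = Entity(
            components={"stand": object(), "walk": object(), "swing": object()},
            transitions={"stand": {"walk", "swing"}, "walk": {"stand", "swing"}, "swing": set()},
        )

        # A stand-and-swing enemy reuses the same components with a smaller table.
        sentry = Entity(
            components={"stand": object(), "swing": object()},
            transitions={"stand": {"swing"}, "swing": set()},
        )

    Since the coupling lives in the table rather than in the components, adding a new state means adding one component and editing data, not touching the existing components.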

  • Custom Transport Agent: How do I collect NDRs and all other undeliverables in Exchange 2010 from the Postmaster?

    - by makerofthings7
    I'm trying to collect all NDRs in a single mailbox: for all invalid recipients, and for anything that fails for any reason. I have a custom transport agent that I've written myself, which appears here:

        [PS] C:\Windows\system32>Get-TransportAgent

        Identity                        Enabled   Priority
        --------                        -------   --------
        Transport Rule Agent            True      1
        Text Messaging Routing Agent    True      2
        Text Messaging Delivery Agent   True      3
        Routing Rule Agent              True      4

    Sometimes when I run Get-MessageTrackingLog I get failures like the one below:

        RunspaceId              : 4ecc61fb-13b9-4506-b680-577222c9bf21
        Timestamp               : 10/14/2013 12:42:42 PM
        ClientIp                :
        ClientHostname          : Exchange1
        ServerIp                :
        ServerHostname          :
        SourceContext           : Routing Rule Agent
        ConnectorId             :
        Source                  : AGENT
        EventId                 : FAIL
        InternalMessageId       : 4416
        MessageId               : <[email protected]>
        Recipients              : {[email protected]}
        RecipientStatus         : {}
        TotalBytes              : 4542
        RecipientCount          : 1
        RelatedRecipientAddress :
        Reference               :
        MessageSubject          : review CGRC due diligence.
        Sender                  : [email protected]
        ReturnPath              : [email protected]
        MessageInfo             :
        MessageLatency          :
        MessageLatencyType      : None
        EventData               :

    How can I collect the NDRs in a single mailbox for review? I have already run the following command, but it has had no effect:

        [PS] C:\>Set-TransportConfig -JournalingReportNdrTo [email protected] -ExternalPostmasterAddress [email protected]

  • Centrally managing 100+ websites without bankrupting a small company

    - by palintropos
    I'm mainly interested in opinions on the trade-offs between having a single central server all the websites connect to as opposed to each website mirroring a subset of the master database with all the products in it. For example, will I run into severe performance issues (or even security issues, or restrictions) making queries to an offsite database? Will we hit scalability issues we can't handle early on from the sheer bandwidth required to maintain this? If we do go with something like a script that keeps smaller databases (each containing a subset of the central master data) in sync, what sorts of issues will we likely encounter there? I would really like the opinions of people far more knowledgeable than I am regarding the pros and cons of both setups and what headaches we are likely to encounter. CLARIFICATION: This should not be viewed as a question about whether we should implement one database vs multiple databases. This question has been answered numerous times. The question is regarding the pros and cons for a deployment like this having the ability to manage all the websites centrally (one server) vs trying to keep them all in sync if they each have their own db (multiple servers). REAL-WORLD EXAMPLE: We are a t-shirt company, and we have individual websites for our different kinds of t-shirts, but we're looking at a central order management integrated with our single shopping cart (which is ColdFusion + MySQL). Now, let's say we have a t-shirt that's on 10 of our websites and we change an image for it. Ideally we would change that in one place and the change would propagate, but how would we set this up?
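
    A minimal sketch of what the "one database per site, kept in sync" variant implies for the image-change example (assuming MySQL per the stack described; host, table and column names are hypothetical):

        import pymysql  # assumption: the PyMySQL driver

        SITE_DBS = ["site1.example.com", "site2.example.com"]  # one per website

        def set_product_image(product_id, image_url):
            # Write to the master first...
            master = pymysql.connect(host="master.example.com", user="app",
                                     password="...", database="catalog")
            with master.cursor() as cur:
                cur.execute("UPDATE products SET image_url = %s WHERE id = %s",
                            (image_url, product_id))
            master.commit()
            # ...then push the same change to every site's copy.
            for host in SITE_DBS:
                site = pymysql.connect(host=host, user="app",
                                       password="...", database="catalog")
                with site.cursor() as cur:
                    cur.execute("UPDATE products SET image_url = %s WHERE id = %s",
                                (image_url, product_id))
                site.commit()

    Every write path needs this fan-out (plus retry handling for sites that are unreachable mid-sync), which is the hidden cost of the multi-database option; MySQL's built-in replication does the same propagation more robustly than application code.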

  • Automating the Backup of a SQL Server 2008 Express Database

    - by JaydPage
    Steps involved: 1) Create a database backup script. 2) Create a scheduled task to run the backup script.

    1. Create a database backup script. a) Download and install SQL Server Management Studio. This is a free tool available on the Microsoft website. b) Once Management Studio is installed, launch it and connect to the SQL Server instance that contains the database you want to back up. c) Right-click on the database, and in the menu choose Tasks -> Back Up... d) This will open a window where you can choose your backup options; once you are happy with the options, click on the "Script" button near the top and select the "Script Action to File" option. e) Save the file.

    2. Create a scheduled task to run the backup script. a) Open up Windows Task Scheduler. b) Create a new task using the wizard; when asked to select a program, browse to C:\Program Files\Microsoft SQL Server\100\Tools\binn\SQLCMD.exe c) There are 2 arguments that need to be set: -S SERVER_INSTANCE_NAME -i "PATH_OF_SQLBACKUP_SCRIPT" where SERVER_INSTANCE_NAME is the name of the instance of SQL Server that contains your database, e.g. (local), and PATH_OF_SQLBACKUP_SCRIPT is the path of your backup script, e.g. "C:\Program Files\Microsoft SQL Server\DatastoreBackup.sql" d) Adjust the task to run at the desired times and you are done.

  • recommendation for configuration for a multi-core guestOS

    - by reidLinden
    Hi there, I've just received an upgraded host machine, and am looking to push some of those advances to my workstation's guest OS(es). In particular, I used to have a single processor with 2 cores, so my guest OS had 1 processor/1 core. Now I've got a single processor with 8 cores, so I'm curious what would be recommended for my guest OS now? 1 processor/4 cores? 2 processors/2 cores? 4 processors/1 core? My instinct says to stick with the number of physical processors (or fewer), but is that based on reality? I spent a good while looking for an answer to this, but perhaps my google-karma isn't in my favor today. Suggestions?

  • Semantic coupling vs. large class

    - by user106587
    I have hardware I communicate with via TCP. This hardware accepts ~40 different commands/requests with about 20 different responses. I've created a HardwareProxy class which has a TcpClient to send and receive data. I didn't like the idea of having 40 different methods to send the commands/requests, so I started down the path of having a single SendCommand method which takes an ICommand and returns an IResponse; this results in 40 different SpecificCommand classes. The problem is this requires semantic coupling, i.e. the method that invokes SendCommand receives an IResponse which it has to downcast to SpecificResponse. I use a future map which I believe ensures the appropriate SpecificResponse, but I get the impression this code smells. Besides the semantic coupling, ICommand and IResponse are essentially empty abstract classes (marker interfaces), and this seems suspicious to me. If I go with the 40 methods, I don't think I have broken the single responsibility principle, as the responsibility of the HardwareProxy class is to act as the hardware, which has all of these commands. This route is just ugly; plus, I'd like to have asynchronous versions, so there'd be about 80 methods. Is it better to bite the bullet and have a large class, accept the coupling and marker interfaces for a smaller solution, or am I missing a better way? Thanks.
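
    One way out of the downcast, sketched in Python for brevity (the same shape works with generics in C#, e.g. an ICommand<TResponse>): let each command carry its response type and do its own decoding, so a single generic send method returns the concrete response without casts or marker interfaces:

        from dataclasses import dataclass
        from typing import Generic, TypeVar

        R = TypeVar("R")

        class Command(Generic[R]):
            """Each command knows how to decode the bytes it gets back."""
            def parse(self, payload: bytes) -> R:
                raise NotImplementedError

        @dataclass
        class StatusResponse:
            code: int

        class StatusCommand(Command[StatusResponse]):
            def parse(self, payload: bytes) -> StatusResponse:
                return StatusResponse(code=payload[0])

        class HardwareProxy:
            def send(self, command: Command[R]) -> R:
                payload = self._exchange(command)  # real TCP round-trip goes here
                return command.parse(payload)

            def _exchange(self, command) -> bytes:
                return b"\x00"                     # stub so the sketch runs

        resp = HardwareProxy().send(StatusCommand())  # typed as StatusResponse, no cast

    This keeps one send method (plus one async variant) instead of 80, and the per-command knowledge lives in the 40 small command classes rather than in the proxy.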

  • My windows keyboard is being "clever" with the quote keys - how can I stop it?

    - by Marcin
    I'm using Windows 7 on a laptop. On the laptop keyboard, for some reason, the quote key (which has both double and single quotes on it) is doing some "clever", annoying things: when I press single-quote (or double-quote), Windows doesn't send any characters until I press it twice (resulting in '' or ""). When I press it before a vowel, I get some kind of accented character. As I usually only write English, this is annoying. The backtick/tilde key is subject to similar behaviour. I have not attempted to set up my computer to process anything other than English. My keyboard appears to be (in so far as these things are standard on laptops) a standard US QWERTY keyboard. How can I stop this happening?

  • How did we get saddled with the (hierarchical) filesystem as the basic data structure?

    - by user1936
    I'm self-taught and I don't have a CS degree. The more I've been learning about data structures, the more I wonder, in this day and age, how we are still saddled with the filesystem, with directories and files, as the basic data storage structure on the OS. I understand the simplicity of it, but it seems nowadays that there could be more options available natively. As far as I'm aware, the only project to improve the basic functionality of the filesystem was ReiserFS, where you could tell what line of a file was changed by whom, and when. For instance, if I could have native tagging for files, where I could tag images, diagrams, word-processing documents, an entire code repository, all as belonging to a single project, that would really be helpful to me. Since I'm stuck in the filesystem paradigm, I know that I could put all of those into a single folder/directory, but what if they already exist in disparate directories, and they need to stay there? I know there are programs out there that can do this, but why aren't they on the filesystem? Something that would be nice to have is some kind of relational feature in the filesystem, like you get with RDBMSes. I understand that that was supposed to be part of Vista/7, but that fell off the feature list too. Sure, any program can store a binary file and have any data structure it wants in it, but why couldn't the OS offer more complex ways of storing data, beyond the simple hierarchy of the filesystem?
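
    A minimal sketch of the relational tagging described above, layered on top of the filesystem rather than inside it (paths and tag names are just examples):

        import sqlite3

        db = sqlite3.connect("tags.db")
        db.execute("CREATE TABLE IF NOT EXISTS tags (tag TEXT, path TEXT, UNIQUE(tag, path))")

        def tag(path, *names):
            # Attach any number of tags to a file, wherever it lives.
            db.executemany("INSERT OR IGNORE INTO tags VALUES (?, ?)",
                           [(n, path) for n in names])
            db.commit()

        def files_for(name):
            return [row[0] for row in db.execute(
                "SELECT path FROM tags WHERE tag = ?", (name,))]

        tag("/home/me/diagrams/arch.svg", "projectX", "diagram")
        tag("/srv/repos/projectx", "projectX", "code")
        print(files_for("projectX"))  # both paths, despite disparate directories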

  • Alternatives to type casting in your domain

    - by Mr Happy
    In my domain I have an entity Activity which has a list of ITasks. Each implementation of this task has its own properties besides the implementation of ITask itself. Now each operation of the Activity entity (e.g. Execute()) only needs to loop over this list and call an ITask method (e.g. ExecuteTask()). Where I'm having trouble is when a specific task's properties need to be updated. How do I get an instance of that task? The options I see are: Get the Activity by Id and cast the task I need. This'll sprinkle my code with either: Tasks.OfType<SpecificTask>().Single(t => t.Id == taskId) or Tasks.Single(t => t.Id == taskId) as SpecificTask Make each task unique in the whole system (make each task an entity), and create a new repository for each ITask implementation. I don't like either option: the first because I don't like casting (I'm using NHibernate and I'm sure this'll come back and bite me when I start using lazy loading, since NHibernate currently uses proxies to implement it), and the second because there are/will be dozens of different kinds of tasks, which would mean I'd have to create as many repositories. Am I missing a third option here? Or are any of my objections to the two options not justified? How have you solved this problem in the past?

  • Common filesystem for servers behind a rackspace load balancer

    - by thanos panousis
    Our PHP application consists of a single web server that receives files from clients and performs a CPU-intensive analysis on them. Right now, analysis of a single user upload can take 3 seconds to complete at 100% CPU, which caps our capacity at about 1/3 of a request per second. My team's requirement is to increase capacity without a lot of code reengineering. A possible solution would be to set up a load balancer in front of multiple servers running the same app, connecting to a common DB. The problem is that the analysis outputs files on disk. A load balancer would increase capacity, but then files won't be available between servers, so subsequent client requests may fail. We are hosted on Rackspace; is there a way to configure some sort of "common" storage for all servers, without having to rewrite our file persistence code? The current code relies on simple fopens etc. What are our options?

  • Infected Win XP disk not openable on a Vista computer

    - by Retired57
    How can I read a Windows XP partitioned disk that is infected and, over a USB connection to a Windows Vista computer, shows up as a single partition whose contents Vista cannot see? The XP computer's HDD is partitioned into 4 partitions. It became infected, and all attempts to clean it have failed: applications begin to launch but are then shut down by the infecting agent. Using a major anti-virus company's boot disk (which was unable to connect to the web, probably because the infecting agent stopped it) with virus definitions dated after the disk became infected, the resulting scan showed no infection. I bought a USB cable to connect the IDE drive to my Vista computer, but when I open Windows Explorer, it sees the disk but does not show any contents. It indicates it is a single, valid partition; however, in all the ways I have tried, it does not show the drive's contents. Any suggestions on what to do next?

  • For an ORM supporting data validation, should constraints be enforced in the database as well?

    - by Ramnique Singh
    I have always applied constraints at the database level in addition to my (ActiveRecord) models. But I've been wondering if this is really required. A little background: I recently had to unit test a basic automated timestamp generation method for a model. Normally, the test would create an instance of the model and save it without validation. But there are other required fields that aren't nullable in the table definition, meaning I can't save the instance even if I skip the ActiveRecord validation. So I'm thinking: should I remove such constraints from the db itself, and let the ORM handle them? Possible advantages of skipping constraints in the db, imo: I can modify a validation rule in the model without having to migrate the database, and I can skip validation in testing. Possible disadvantage: if the ORM validation fails or is bypassed in any way, the database does not check for constraints. What do you think? EDIT: In this case, I'm using the Yii Framework, which generates the model from the database, hence database rules are generated too (though I could always write them myself post-generation).

  • Multiple WAPs: Bandwidth, Frequency Considerations

    - by Pete Cresswell
    The router in my LAN closet does 2.4 and 5 GHz. In the kitchen, I have a single-band 2.4 GHz WAP, and in the garden shed I have another single-band 2.4 GHz WAP. All are set to Bandwidth = 40 MHz, Wireless Network Mode = N-Only. The kitchen WAP and the LAN closet router both come up with multiple bars on my smartphone from almost anywhere in the house. The garden shed WAP will register one bar, but only sometimes. The questions: Are these things in danger of butting heads? Should I re-set them to Bandwidth = 20 MHz? Bandwidth = Auto? Are there any tools that I could use on an Android smartphone, iPod, or WiFi-enabled laptop to make my own analysis?

  • Making audio CDs en mass - Linux based solutions?

    - by The Journeyman geek
    My mom sings and gives away CDs to people. Invariably it falls to me to burn the CDs for her, and burning 50-100 CDs on a single drive is a pain. I DO have a handful of CD burners and a slightly geriatric old PIII 450. This is what I want to be able to do: either point an application at a folder of WAVs or MP3s and say how many copies I need on the CLI (since then I can SSH into the system and use it headless), feeding 2 or more CD burners discs until it's done, OR pop a single CD into a master drive and have its contents duplicated to 2 or more burners. I'd rather have it running on Linux, be command-line based, and be as little work as possible: almost automatic, short of telling it how many copies I want, would be ideal. I'm sure I'll have people wondering about legality. My mom sings her own music, and it's classical, and older than copyright law, so that's a non-issue. I just want a way to make this chore a little easier, short of telling my mom to do it herself.
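
    A minimal sketch of the first workflow, assuming wodim (the cdrecord fork) is installed and the WAVs are already CD-audio format (44.1 kHz, 16-bit, stereo); the device names come from `wodim --devices` and are illustrative:

        import itertools
        import subprocess
        import sys
        from pathlib import Path

        BURNERS = ["/dev/sr0", "/dev/sr1"]  # adjust to the actual drives

        # Usage over SSH: python burn.py /path/to/wavs 50
        folder, copies = Path(sys.argv[1]), int(sys.argv[2])
        tracks = sorted(str(p) for p in folder.glob("*.wav"))

        for n, dev in zip(range(copies), itertools.cycle(BURNERS)):
            input(f"Copy {n + 1}/{copies}: blank CD in {dev}, then press Enter...")
            subprocess.run(["wodim", f"dev={dev}", "-audio", "-pad", *tracks],
                           check=True)

    This burns one disc at a time, alternating drives; to keep all burners busy simultaneously, run one instance of the loop per drive instead.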

  • Complex string matching with fuzzywuzzy

    - by That1Guy
    I'm attempting to write a process that matches obscure strings to a single 'master string' for further processing. I have a lot of data that looks something like this:

        Basketball
        Basket Ball
        Football
        BasketBallR
        BBall
        BBall - r
        FootB

    ...and so on. These need to be mapped to a master record like so:

        Basketball = Basket Ball, BBall
        Basketball - R = BasketBallR, BBall - r

    I also have instances of data resembling this format:

        Football -r
        FootBall - r-g/H,Q,HH

    These situations need to be separated into different categories before being mapped. For example, FootBall - r-g/H,Q,HH should become:

        Football - r
        Football - g
        Football - H
        Football - Q
        Football - HH

    At this point, it still needs to be mapped to a master record... I've tried several different combinations of fuzzywuzzy matching methods, Levenshtein distance measurements, regex, etc., and can't seem to find a reliable method to logically associate different naming styles of a single item with a master name. I'm throwing my hands up in desperation. Are there any existing Python resources that can help sort out my problem? Are there other options? Can anybody point out an obvious option that I might have overlooked? Basically, any suggestion, solution, resource or alternative method is greatly appreciated.
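
    A minimal sketch of the two-stage shape this suggests: first expand the compound suffix codes into individual candidates, then fuzzy-match each candidate against the master list (the expansion regex only handles the "base - codes" pattern shown above):

        import re
        from fuzzywuzzy import process

        MASTERS = ["Basketball", "Basketball - R", "Football", "Football - r"]

        def expand(raw):
            # "FootBall - r-g/H,Q,HH" -> ["FootBall - r", "FootBall - g", ...]
            m = re.match(r"\s*([A-Za-z ]+?)\s*-\s*(.+)", raw)
            if not m:
                return [raw]
            base, codes = m.group(1), re.split(r"[-/,]", m.group(2))
            return [f"{base} - {c.strip()}" for c in codes if c.strip()]

        for candidate in expand("FootBall - r-g/H,Q,HH"):
            best, score = process.extractOne(candidate, MASTERS)
            print(candidate, "->", best, score)

    Scores below some threshold (say 80) can be routed to a human-review queue instead of being mapped automatically.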

  • How to disable Spotlight content indexing in Mac OS

    - by o.v.
    From my Windows experience, I could always set Live Search to index only file names, not file contents. Is this something that can be done with Spotlight on a Mac? It used to index absolutely everything; for instance, it would return a bunch of video files for any obscure character combination typed into the search field. Right now I've disabled Spotlight entirely as per this answer, but that seems to have disabled searching altogether. For instance, Finder has yet to locate any .pdf files in a small directory as I type this question (unlike Windows search, which would still work even with indexing disabled). Alternatively, if there is any way (including a trusted third-party app) to index only file names and metadata, e.g. ID3 tags, that would likely be the preferred option.

  • Using a CDN for CMS software (multiple sites)

    - by SmokeyPHP
    I'm currently researching ideas for the media management side of a CMS I'm writing. I was looking at having images served from a CDN, which is fine on a single site, but I want all sites that run the CMS to make use of a CDN (which will most likely be a custom-developed one, rather than a third-party service like S3). My main question is: is a multi-site CDN a good idea? I can't think of a downside, but have probably missed something. Obviously the sites won't share the same folder; I envisage the requests being css.cdnsite.com/example.com/style.css or something along those lines. Having multiple sites in the same place will obviously make it easier for us to manage, as well as being cheaper, but then I wonder if it'll be worth it... Long story short, how should the CMS handle user-uploaded media across separate installations? Just keep a local copy of all assets and serve them from the same site, like in days of yore? Keep a local copy, force sites to use www. and have CDN subdomains per site? Or use a single separate CDN for all sites? Apologies for the length of this question; not sure if this should be multiple questions or not, as all parts are kind of related and could affect each other.

  • Enable a program in Windows to run multiple times?

    - by user135490
    I've got this legacy software that only allows you to run one copy at a time: it detects that you have another session open and won't allow you to open a second instance. The problem is that this is a CPU-intensive program and it only uses a single core. Are there any hacks or tweaks to trick it into letting me open more than one instance? This would allow me to retire about 5 servers... I'm using Windows 2008 R2. I had to use CFF Explorer to enable the use of more than 2GB of RAM, as the program crashes when it tries to use more than 2GB.

  • MySQL slow query log logging all queries

    - by Blanka
    We have a MySQL 5.1.52 Percona Server 11.6 instance that suddenly started logging every single query to the slow query log. The long_query_time configuration is set to 1, yet suddenly we're seeing every single query (e.g. we just saw one that took 0.000563s!). As a result, our log files are growing at an insane pace; we just had to truncate a 180G slow query log file. I tried setting the long_query_time variable to a really large number (1000000) to see if it stopped altogether, but same result.

        show global variables like 'general_log%';
        +------------------+--------------------------+
        | Variable_name    | Value                    |
        +------------------+--------------------------+
        | general_log      | OFF                      |
        | general_log_file | /usr2/mysql/data/db4.log |
        +------------------+--------------------------+
        2 rows in set (0.00 sec)

        show global variables like 'slow_query_log%';
        +---------------------------------------+-------------------------------+
        | Variable_name                         | Value                         |
        +---------------------------------------+-------------------------------+
        | slow_query_log                        | ON                            |
        | slow_query_log_file                   | /usr2/mysql/data/db4-slow.log |
        | slow_query_log_microseconds_timestamp | OFF                           |
        +---------------------------------------+-------------------------------+
        3 rows in set (0.00 sec)

        show global variables like 'long%';
        +-----------------+----------+
        | Variable_name   | Value    |
        +-----------------+----------+
        | long_query_time | 1.000000 |
        +-----------------+----------+
        1 row in set (0.00 sec)
