Search Results

Search found 13675 results on 547 pages for 'online repository'.


  • Online Password Security Tactics

    - by BuckWoody
    Recently two more large databases were attacked and compromised, one at the popular Gawker Media sites and the other at McDonald’s. Every time this kind of thing happens (which is far too often) it should remind technical professionals to ensure that they secure their systems correctly. If you write software that stores passwords, the passwords should be heavily encrypted and never human-readable in any storage. I advocate a separate store for the login and the password, so that if one is compromised, the other is not. I also advocate setting a bit flag when a user changes their password, and sending out a reminder to change it if that flag hasn't changed in three or six months.

    But this post is about the *other* side – what to do to secure your own passwords, especially those you use online, either in a cloud service or at a provider. While you're not in control of these breaches, there are some things you can do to help protect yourself. Most of these are obvious, but they contain a few little twists that make the process easier.

    Use Complex Passwords
    This is easily stated, and probably one of the most unheeded pieces of advice. There are three main concepts here:

    - Don't use a dictionary-based word
    - Use mixed case
    - Use punctuation, special characters and so on

    So this: password
    isn't nearly as safe as this: P@ssw03d

    Of course, this only helps if the site that stores your password encrypts it. Gawker does, so with the second password you're at least theoretically in better shape than with the first. Dictionary words are quickly broken regardless of the encryption, so the more unusual characters you use, and the farther you get from dictionary words, the better.

    None of this helps, though, not even a little, if the site stores the passwords in clear text, or if the key to their encryption is broken. In that case...

    Use a Different Password at Every Site
    What? I have hundreds of sites! Are you kidding me? Nope – I'm not. If you use the same password at every site, then when one site is attacked, the attacker will store your name and password for use against other sites. So the only safe thing to do is to use a different name or password (or both) at each site. Of course, most sites use your e-mail address as the username, so you're kind of hosed there. That means that even with hundreds of sites to visit, you need at least a different password at each one.

    But it's easier than you think – if you use an algorithm.

    What I'm describing is picking a "root" password and then modifying it based on the site or purpose. That way, if one site is compromised, you can still use the root password for the other sites.

    Let's take that second password: P@ssw03d

    Now append, prepend or intersperse other characters to make it unique to the site. You still easily remember the root password, but the result is specific to each site. For instance, perhaps you read a lot of information on Gawker – how about these:

        P@ssw03dRead
        ReadP@ssw03d
        PR@esasdw03d

    If you have lots of sites, tracking even this can be difficult, so I recommend password software such as Password Safe or some other tool that keeps a secure database of your passwords for each site. DO NOT store this on the web. DO NOT use an Office document (Microsoft or otherwise) that is "encrypted" – the encryption office automation packages use is trivial and easily broken. A quick web search for tools that crack such documents shows what a bad choice this is.
    Change Your Password on a Schedule
    I know. It's a real pain. And it doesn't seem worth it... until your account gets hacked. A quick note here – whenever a site gets hacked (and I find out about it) I change my password at that site immediately (or quit doing business with them), and then change the root password on every site, as quickly as I can.

    If you follow the tip above, it's not as hard. Just add another number – a year, month, day, something like that – into the mix. It's not unlike making a primary key in an RDBMS.

        P@ssw03dRead10242010

    Change it at the site, and then update your password database. I do this about once a month, on the first or last day, during staff meetings. :)

    If you have other tips, post them here. We can all learn from each other on this.
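
    If you like to see ideas in code, here is a minimal sketch of that algorithm – root password plus site token plus date stamp. The inputs and the format are just examples for illustration, not a vetted password generator:

        // A minimal sketch of the root + site token + date scheme above.
        // The root value, token placement and date format are illustrative
        // assumptions only.
        using System;

        class SitePasswords
        {
            static string Derive(string root, string siteToken, DateTime lastChange)
            {
                // Append the site token, then stamp with the date of the last
                // scheduled change so the value rotates over time.
                return root + siteToken + lastChange.ToString("MMddyyyy");
            }

            static void Main()
            {
                // Prints the post's example: P@ssw03dRead10242010
                Console.WriteLine(Derive("P@ssw03d", "Read", new DateTime(2010, 10, 24)));
            }
        }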


  • NHibernate.MappingException (no persister for) weirdness

    - by Berryl
    The weird part being that I have other tests that validate the mapping, and even the method being called (NHib session.SaveOrUpdate), that run just fine. The entire exception is below. Here is some debug output from a test that does work:

        Item type: Domain.Model.Projects.Project
        item: 007-00-056 ATM Machine Replacement
        Is transient: True  Id: 0
        NHibernate: INSERT INTO Projects (Code, Description) VALUES (@p0, @p1); select insert_rowid();
        @p0 = '007-00-056', @p1 = 'ATM Machine Replacement'

    Here is the same debug output before the exception:

        Item type: Smack.ConstructionAdmin.Domain.Model.Projects.Project
        item: 006-00-023 Refinish Casino Chairs
        Is transient: True  Id: 0

    The two tests are different in that the one that works is just testing the repository, saving in-memory test data. The failing one is saving data that has been converted from a legacy db (which has its own session). The repository is also a replacement design for a different IProjectRepository that worked fine doing this, so the new repository is also a likely suspect here. Does anyone see what I'm missing, or have some questions to narrow it down?

    Cheers,
    Berryl

    === the Exception trace =====

        failed: NHibernate.MappingException : No persister for: Domain.Model.Projects.Project
        at NHibernate.Impl.SessionFactoryImpl.GetEntityPersister(String entityName)
        at NHibernate.Impl.SessionImpl.GetEntityPersister(String entityName, Object obj)
        at NHibernate.Event.Default.AbstractSaveEventListener.SaveWithGeneratedId(Object entity, String entityName, Object anything, IEventSource source, Boolean requiresImmediateIdAccess)
        at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.SaveWithGeneratedOrRequestedId(SaveOrUpdateEvent event)
        at NHibernate.Event.Default.DefaultSaveEventListener.SaveWithGeneratedOrRequestedId(SaveOrUpdateEvent event)
        at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.EntityIsTransient(SaveOrUpdateEvent event)
        at NHibernate.Event.Default.DefaultSaveEventListener.PerformSaveOrUpdate(SaveOrUpdateEvent event)
        at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.OnSaveOrUpdate(SaveOrUpdateEvent event)
        at NHibernate.Impl.SessionImpl.FireSave(SaveOrUpdateEvent event)
        at NHibernate.Impl.SessionImpl.Save(Object obj)
        NHibernate\Repository\NHibRepository.cs(40,0): at Core.Data.NHibernate.Repository.NHibRepository`1.Add(T item)
        Repositories\ProjectRepository.cs(30,0): at Data.Repositories.ProjectRepository.SaveAll(IEnumerable`1 projects)
        LegacyConversion\LegacyBatchUpdater.cs(20,0): at Data.LegacyConversion.LegacyBatchUpdater.ConvertOpenLegacyProjects(ILegacyProjectDao legacyProjectDao, IProjectRepository greenProjectRepository)
        Data\Brownfield\ProjectBatchUpdate_SQLiteTests.cs(31,0): at .Tests.Data.Brownfield.ProjectBatchUpdate_SQLiteTests.Test()
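
    In case it helps narrow things down, here is a contrived sketch of the kind of split I suspect (the config file names are invented): NHibernate throws "No persister for" when the factory behind the session never had that entity's mapping registered, which is easy to hit when the legacy conversion runs through its own, separately configured session:

        // Each ISessionFactory only knows mappings added to its own Configuration.
        var mainCfg = new NHibernate.Cfg.Configuration().Configure("main.cfg.xml");
        mainCfg.AddAssembly(typeof(Project).Assembly);        // Project mapped here
        var mainFactory = mainCfg.BuildSessionFactory();

        var legacyCfg = new NHibernate.Cfg.Configuration().Configure("legacy.cfg.xml");
        var legacyFactory = legacyCfg.BuildSessionFactory();  // Project NOT mapped here

        using (var session = legacyFactory.OpenSession())
        {
            // Throws NHibernate.MappingException: No persister for: ...Project
            session.SaveOrUpdate(new Project());
        }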


  • Mercurial outgoing Hook

    - by Tom Bell
    I'm looking to create a Mercurial hook that pushes to a backup remote repository when I push to a local repository. I thought I could use the 'outgoing' hook, but this creates an infinite loop that isn't pretty. So is there something like a post-push hook, or would it be best to have the repository I am pushing to use an 'incoming' hook to push to the remote backup instead?


  • DDD: Getting aggregate roots for other aggregates

    - by Ed
    I've been studying DDD for the past 2 weeks, and one of the things that really stuck out to me was how aggregate roots can contain other aggregate roots. Aggregate roots are retrieved from the repository, but if a root contains another root, does the repository hold a reference to the other repository and ask it to build the sub-root?
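
    For illustration, here is a contrived sketch of the contrasting approach – holding only the other root's identity and resolving it through that root's own repository, so repositories never compose each other (Order, Customer and these interfaces are all invented for the example):

        using System;

        public class Customer                      // one aggregate root
        {
            public Guid Id { get; set; }
        }

        public class Order                         // another aggregate root
        {
            public Guid Id { get; set; }
            public Guid CustomerId { get; set; }   // holds the other root's identity only
        }

        public interface IOrderRepository    { Order Get(Guid id); }
        public interface ICustomerRepository { Customer Get(Guid id); }

        // An application service, not the repository, resolves the second root:
        public class OrderQueries
        {
            private readonly IOrderRepository _orders;
            private readonly ICustomerRepository _customers;

            public OrderQueries(IOrderRepository orders, ICustomerRepository customers)
            {
                _orders = orders;
                _customers = customers;
            }

            public Customer CustomerFor(Guid orderId)
            {
                var order = _orders.Get(orderId);
                return _customers.Get(order.CustomerId);  // each root via its own repository
            }
        }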


  • SVN and VSS sync

    - by Rodrigo
    Can I sync files from SVN to VSS automatically? My personal repository is SVN and my client has a VSS repository. I would like to sync the repositories through scripts or something like that. Can I? Thanks


  • ASP.NET MVC Unit Testing Controllers - Repositories

    - by Brian McCord
    This is more of an opinion-seeking question, so there may not be a "right" answer, but I would welcome arguments as to why your answer is the "right" one. Given an MVC application that is using Entity Framework for the persistence engine, a repository layer, a service layer that basically defers to the repository, and a delete method on a controller that looks like this:

        public ActionResult Delete(State model)
        {
            try
            {
                if( model == null )
                {
                    return View( model );
                }
                _stateService.Delete( model );
                return RedirectToAction("Index");
            }
            catch
            {
                return View( model );
            }
        }

    I am looking for the proper way to unit test this. Currently, I have a fake repository that gets used in the service, and my unit test looks like this:

        [TestMethod]
        public void Delete_Post_Passes_With_State_4()
        {
            //Arrange
            var stateService = GetService();
            var stateController = new StateController( stateService );
            ViewResult result = stateController.Delete( 4 ) as ViewResult;
            var model = (State)result.ViewData.Model;

            //Act
            RedirectToRouteResult redirectResult = stateController.Delete( model ) as RedirectToRouteResult;
            stateController = new StateController( stateService );
            var newresult = stateController.Delete( 4 ) as ViewResult;
            var newmodel = (State)newresult.ViewData.Model;

            //Assert
            Assert.AreEqual( redirectResult.RouteValues["action"], "Index" );
            Assert.IsNull( newmodel );
        }

    Is this overkill? Do I need to check whether the record actually got deleted (as I already have service and repository tests that verify this)? Should I even use a fake repository here, or would it make more sense just to mock the whole thing? The examples I'm looking at used this model of doing things, and I just copied it, but I'm really open to doing things in a "best practices" way. Thanks.
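
    For comparison, here is roughly what a fully mocked version might look like, assuming Moq and an IStateService interface (both assumptions, not part of the code above):

        [TestMethod]
        public void Delete_Post_Redirects_To_Index_And_Delegates_To_Service()
        {
            //Arrange - IStateService is assumed; Moq fakes it outright
            var service = new Mock<IStateService>();
            var controller = new StateController( service.Object );
            var model = new State();

            //Act
            var redirect = controller.Delete( model ) as RedirectToRouteResult;

            //Assert - verify the interaction rather than re-checking
            //persistence, which the service/repository tests already cover
            Assert.AreEqual( "Index", redirect.RouteValues["action"] );
            service.Verify( s => s.Delete( model ), Times.Once() );
        }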


  • Setting up 2 or more Repositories?

    - by user364133
    My question is: can I have 2 repositories without losing my original repository? Let's say I want the eclair source repository:

        repo init -u git://android.git.kernel.org/platform/manifest.git -b eclair

    (already synced and working) and I would also like to sync with cyanogen's repository:

        repo init -u git://github.com/cyanogen/android.git -b eclair

    All I basically want to do is have both repositories without altering or messing up the original. Thanks.


  • Git - post-receive hook with git pull "Failed to find a valid git directory"

    - by ludicco
    It's very weird, but when setting up a git repository and creating a post-receive hook with:

        echo "--initializing hook--"
        cd ~/websites/testing
        echo "--prepare update--"
        git pull
        echo "--update completed--"

    the hook runs indeed, but it never manages to run git pull properly:

        6bfa32c..71c3d2a  master -> master
        --initializing hook--
        --prepare update--
        fatal: Not a git repository: '.'
        Failed to find a valid git directory.
        --update completed--

    So I'm asking myself now: how is it possible to make the hook update the clone with post-receive? In this case the user running the processes is the same, and everything is inside the user folder, so I really don't understand... because if I go manually into

        cd ~/websites/testing
        git pull

    it works without any problem. Any help on that would be pretty much appreciated. Thanks a lot


  • Merging Two Git Repositories with branches

    - by Joel K
    I realize there's a Stack Overflow question: http://stackoverflow.com/questions/277029/combining-multiple-git-repositories But I haven't found git-stitch-repo to be quite the tool I'm looking for. I also consider this more of a sysadmin task. How do I take code from an external repository and combine it with code from a primary repository while maintaining history/diffs and branches? Use case: an outside development team using SVN has ported to git and now wants to 'merge' their code into the main company's git repo. I've tried subtree merges, but I lose the history. I've tried git-stitch-repo, but that process results in an entirely new repo that's missing branches. I just want to slot in some outside code as a sub-directory in our current main repo, with as little disruption as possible, while maintaining the other project's history. Any success stories out there?


  • Setting up Gitosis, where to create the repos?

    - by ReynierPM
    I'm trying to set up Gitosis on CentOS 6.2, but I have some doubts/problems about it. I read the docs here, here and here, but it's unclear to me where to configure where repositories are created. My server has a partition /data where I created a directory called /gitrepos. I want all the repos created under that directory. By default, if I run the command:

        gitosis-init < /home/reynierpm/reynierpm.pub

    I get this:

        Initialized empty Git repository in /root/repositories/gitosis-admin.git/
        Reinitialized existing Git repository in /root/repositories/gitosis-admin.git/

    And I want the repos created under /data/gitrepos. Any help? Thanks in advance


  • How do I deploy a charm from a local repository?

    - by Matt McClean
    I am trying to run the Charm tutorial from the juju documentation by creating a new charm in a local repository. I started by installing the charms from bzr to my local Ubuntu 12.04 desktop running in a virtual machine. The new file structure is the following:

        ubuntu@ubuntu-VirtualBox:~$ find charms/precise/drupal/
        charms/precise/drupal/
        charms/precise/drupal/hooks
        charms/precise/drupal/hooks/db-relation-changed
        charms/precise/drupal/hooks/install
        charms/precise/drupal/hooks/start
        charms/precise/drupal/hooks/stop
        charms/precise/drupal/metadata.yml
        charms/precise/drupal/README

    When I install the mysql charm, which was downloaded from the remote charm repository, it works fine. However, when I run the following command to deploy the new charm, it fails with the following error message:

        ubuntu@ubuntu-VirtualBox:~$ juju deploy --repository=charms local:precise/drupal
        2012-05-09 10:01:05,671 INFO Searching for charm local:precise/drupal in local charm repository: /home/ubuntu/charms
        2012-05-09 10:01:05,845 WARNING Charm '.mrconfig' has an error: CharmError() Error processing '/home/ubuntu/charms/precise/.mrconfig': unable to process /home/ubuntu/charms/precise/.mrconfig into a charm
        Charm 'local:precise/drupal' not found in repository /home/ubuntu/charms
        2012-05-09 10:01:06,217 ERROR Charm 'local:precise/drupal' not found in repository /home/ubuntu/charms

    Is there some file missing from the drupal charm directory that juju needs to consider the charm valid? Also, I get the file-processing error for the .mrconfig file when deploying the mysql charm too, so is there something I need to change there as well?


  • yum not working on EC2 Red Hat instance: Cannot retrieve repository metadata

    - by adev3
    For some reason yum has stopped working on my Amazon EC2 instance, located in the EU West region. There seems to be something wrong with the path of the repo metadata; is this correct? I would be very grateful for any help, as my experience in this field is somewhat limited. Thank you very much.

        cat /etc/redhat-release:
        Red Hat Enterprise Linux Server release 6.2 (Santiago)

        yum repolist:
        Loaded plugins: amazon-id, rhui-lb, security
        https://rhui2-cds01.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        https://rhui2-cds02.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.

        repo id                                        repo name                                                          status
        rhui-eu-west-1-client-config-server-6          Red Hat Update Infrastructure 2.0 Client Configuration Server 6        0
        rhui-eu-west-1-rhel-server-releases            Red Hat Enterprise Linux Server 6 (RPMs)                               0
        rhui-eu-west-1-rhel-server-releases-optional   Red Hat Enterprise Linux Server 6 Optional (RPMs)                      0
        repolist: 0

        yum update:
        (I needed to remove the base URLs below because of ServerFault's restrictions for new users)
        Loaded plugins: amazon-id, rhui-lb, security
        [same as base url 1 above]/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        [same as base url 2 above]/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        Error: Cannot retrieve repository metadata (repomd.xml) for repository: rhui-eu-west-1-client-config-server-6. Please verify its path and try again


  • Mirror a Dropbox repository in Sharepoint and restrict access

    - by Dan Robson
    I'm looking for an elegant way to solve the following problem: my development team uses Dropbox for sharing documents amongst our immediate group. We'd like to put some of those documents into a SharePoint repository for the larger group to be able to access, as granting Dropbox access to the group at large is not ideal. However, we'd like to continue to be able to propagate changes to the SharePoint site simply by updating the files in Dropbox on our local client machines, and also vice versa – users granted access on SharePoint that update files in that workspace should be able to save their files and have the changes appear automatically on our client PCs.

    I've already organized the folders so that in Dropbox there exists a SharePoint folder that looks something like this:

        SharePoint
        ----Team
        --------Restricted Access Folders
        ----Organization
        --------Open Access Folders

    The Dropbox master account and the SharePoint master account are both set up on my file server. Unfortunately, Dropbox doesn't seem to allow syncing of folders anywhere above the \Dropbox\ part of the file system's hierarchy – or all I would have to do is find where the SharePoint repository is maintained locally, and I'd be golden. So it seems I have to do some sort of two-way synchronization between the Dropbox folder on the file server and the SharePoint folder on the file server.

    I messed around with Microsoft SyncToy, but it seems to be lacking in the area of real-time updating – and as much as I love rsync, I've had nothing but bad luck with it on Windows. Again, it has to be kicked off manually or through Task Scheduler, and I just have a feeling that if I go down that route, it's only a matter of time before I get conflicts all over the place in Dropbox, SharePoint, or both. I really want something that's going to watch both folders and, when one item changes, automatically update the other in real time.

    It's quite possible I'm going down the entirely wrong route, which is why I'm asking the question. For simplicity's sake, I'll restate the goal: to be able to update Dropbox and have it viewable on the SharePoint site, or to update the SharePoint site and have it viewable in Dropbox. And since I'm a SharePoint noob, I'll also need help hiding the "Team" subfolder from everyone not in a specific group in AD.
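
    For reference, the "watch a folder and copy changes across" piece I have in mind could be sketched in .NET along these lines – one direction only, with invented paths, and ignoring deletes, renames, locked files and the echo loop a real two-way sync would have to solve:

        using System;
        using System.IO;

        class DropboxToSharePointMirror
        {
            static void Main()
            {
                // Hypothetical paths: the Dropbox folder and the locally
                // synced SharePoint workspace on the file server.
                const string source = @"C:\Dropbox\SharePoint";
                const string target = @"C:\SharePointWorkspace";

                var watcher = new FileSystemWatcher(source) { IncludeSubdirectories = true };

                FileSystemEventHandler copy = (s, e) =>
                {
                    if (!File.Exists(e.FullPath)) return;       // skip directories/deletes
                    var dest = Path.Combine(target, e.Name);
                    Directory.CreateDirectory(Path.GetDirectoryName(dest));
                    File.Copy(e.FullPath, dest, true);          // overwrite on change
                };
                watcher.Created += copy;
                watcher.Changed += copy;
                watcher.EnableRaisingEvents = true;

                Console.WriteLine("Watching... press Enter to stop.");
                Console.ReadLine();
            }
        }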


  • Managing Internal Yum Repository Groups

    - by elmt
    What is the best method for handling yum group dependencies? For example, take this comps.xml file:

        <comps>
          <group>
            <id>production</id>
            <name>Production</name>
            <default>true</default>
            <description>Packages required to run</description>
            <uservisible>true</uservisible>
            <packagelist>
              <packagereq type="default">ssh</packagereq>
            </packagelist>
          </group>
          <group>
            <id>development</id>
            <name>Development</name>
            <default>false</default>
            <description>Packages required to develop</description>
            <uservisible>true</uservisible>
            <packagelist>
              <packagereq type="default">gcc</packagereq>
            </packagelist>
          </group>
        </comps>

    which is packaged with createrepo -g comps.xml x86_64. The ssh and gcc rpms are not installed in the x86_64 directory. If I run yum groupinstall development, yum is smart enough to pull the gcc package from the RHEL repo even though the groups are defined in my internal repository. However, is this the proper way of doing this, or should I copy the rpms to my local repository and recreate the repo?


  • Automatically update SVN repository on another server

    - by Mikey C
    We have 2 Ubuntu web servers, one of which is our staging server (Staging) and the other is our live server (Live). Staging has our Subversion repository, as well as the latest version of our sites on it. Because the SVN server is running on Staging, I've added post-commit hook scripts so that the staging server automatically has the latest code. Easy. However, I'd like one of the repositories to stay updated on Live as well. This is a repository of images, PDFs and suchlike. When a team member commits to it, I'd like it to automatically update on the live servers so it can be used in mailings, content-managed pages, etc. I'd add something to the post-commit hook to SSH across and update, but for security we can only SSH from one server to another as user 'commandLine', whereas the 'www-data' user runs the post-commit. I'd rather not run a cron job on Live to update every 5 minutes, but I can't see another way of doing it without altering all our user permissions. Any ideas?


  • Microsoft’s 22tracks Music Service now Available in All Browsers

    - by Akemi Iwaya
    Are you tired of listening to the same old music and looking for something new to listen to? Then 22tracks from Microsoft is definitely worth a look! This online music service is available in your favorite browser, does not require an account to use, and lets you listen to music from multiple international sources! If you are curious about 22tracks, then the following excerpt and video sum up the service very nicely. From the blog post:

        The concept behind 22tracks is simple: 22 local top DJs from cities like Amsterdam, Brussels, London and Paris share their genre's 22 hottest tracks of the moment. Each city boasts its own team of specialized DJs bringing you the newest tracks in their genre.

    When you get ready to select (or change to) another set of tracks, just click on the desired city at the top of the browser window, then click on the appropriate set from the drop-down list.

        22tracks Homepage
        22tracks and Internet Explorer team up to bring you a completely new online music experience [22tracks Blog]
        22tracks about [YouTube]
        [via BetaNews and The Next Web]


  • Free Online Performance Tuning Event

    - by Andrew Kelly
    On June 29th, 2010 I will be presenting several sessions related to performance tuning for SQL Server, and they are the best kind because they are free :). So mark your calendars. Here is the event info and URL:

    June 29, 2010 - 10:00 am - 3:00 pm Eastern

    SQL Server is the platform for business. In this day-long free virtual event, well-known SQL Server performance expert Andrew Kelly will provide you with the tools and knowledge you need to stay on top of three key areas related to peak performance...(read more)


  • Another Questionable Article Online…

    - by Jonathan Kehayias
    At the beginning of the month I blogged about my thoughts on the virtualization feedback provided by SSWUG's newsletter, and Rich responded with some information on how the incorrect information led him to draw incorrect conclusions. It seems like every couple of weeks an article, tip, newsletter, or whatever is posted by or on a major site that has questionable, if not outright incorrect, material in it. Last week MSSQLTips posted SQL Server tempdb one or multiple data files, in which...(read more)


  • Microsoft Online Services Platform Launch: Try It Out and Come Discuss It with Microsoft

    After launching its Azure platform at the beginning of the year, Microsoft launched its new MOS platform, Microsoft Online Services, in early March: a set of outsourced communication tools that nevertheless remain at the service of the enterprise. It is a service intended for professionals only, which makes it possible to entrust certain functions to Microsoft: collaborative messaging (Exchange), collaborative workspaces (SharePoint), real-time communications (Office Communications, Live Meeting, Communicator) and office productivity (Office).


  • Watch Google’s I/O 2012 Developer Conference Live (Online) Starting June 27

    - by Asian Angel
    Google's annual I/O conference begins on Wednesday this week and will be filled with exciting sessions about Android, Chrome, Google+, and more. To help you keep up with all the fun, we have the links you need so that you can tune in with live streaming! Photo courtesy of the Google I/O website.

    The keynote for Day 1 will begin at 9:30 a.m. PDT (U.S. time) and the keynote for the second day will begin at 10:00 a.m. PDT (U.S. time), so make sure to mark them on your schedule! Visit the blog post linked below for more details about signing up for Extended Events, the I/O mobile app, the liveblogging gadget, and more.

    SPECIAL NOTE: The Google blog post linked below was slightly ambiguous and listed both of the I/O URLs we have shown here, so make sure to keep a watch on both.

