Search Results

Search found 23062 results on 923 pages for 'multiple models'.

  • Why do I receive multiple warnings of "No running instance of xfce4-panel was found" when logging into Xubuntu?

    - by Fredrik
    I'm running Xubuntu 11.04. Boot-up is quite fast, but after I log in it takes close to a minute before the desktop is displayed, and meanwhile I see no activity on the hard drive. When I finally have the desktop, I see the warning from the title repeated 10 times, followed by one more notification. In .config/autostart I have these entries:

        $ ls
        xfce4-settings-helper-autostart.desktop
        xfce4-clipman-plugin-autostart.desktop
        xfce-panel.desktop

        $ cat xfce-panel.desktop
        [Desktop Entry]
        Encoding=UTF-8
        Version=0.9.4
        Type=Application
        Name=xfce4-panel
        Comment=
        Exec=xfce4-panel
        StartupNotify=false
        Terminal=false
        Hidden=false

    I need some assistance locating the cause of the slow startup (which logs should I look at, etc.), and the source of this annoying xfce4-panel message: where do I find out what starts it?
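
    A hedged place to start digging: session startup messages usually land in the X session log, and Xfce's saved sessions can relaunch the panel a second time on top of autostart. The commands below ship with Xfce, but the channel name and paths are assumptions worth verifying on 11.04:

        # Session startup messages (including autostart failures) usually land here:
        tail -n 50 ~/.xsession-errors

        # Saved sessions (a stale one can relaunch the panel a second time):
        ls ~/.cache/sessions/

        # List what the xfce4-session channel has configured:
        xfconf-query -c xfce4-session -l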

    Read the article

  • How to "DRY up" C# attributes in Models and ViewModels?

    - by DanM
    This question was inspired by my struggles with ASP.NET MVC, but I think it applies to other situations as well. Let's say I have an ORM-generated Model and two ViewModels (one for a "details" view and one for an "edit" view):

        // Model (ORM generated)
        public class FooModel
        {
            public int Id { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string EmailAddress { get; set; }
            public int Age { get; set; }
            public int CategoryId { get; set; }
        }

        // Display ViewModel (use for the "details" view)
        public class FooDisplayViewModel
        {
            [DisplayName("ID Number")]
            public int Id { get; set; }
            [DisplayName("First Name")]
            public string FirstName { get; set; }
            [DisplayName("Last Name")]
            public string LastName { get; set; }
            [DisplayName("Email Address")]
            [DataType("EmailAddress")]
            public string EmailAddress { get; set; }
            public int Age { get; set; }
            [DisplayName("Category")]
            public string CategoryName { get; set; }
        }

        // Edit ViewModel (use for the "edit" view)
        public class FooEditViewModel
        {
            [DisplayName("First Name")]    // not DRY
            public string FirstName { get; set; }
            [DisplayName("Last Name")]     // not DRY
            public string LastName { get; set; }
            [DisplayName("Email Address")] // not DRY
            [DataType("EmailAddress")]     // not DRY
            public string EmailAddress { get; set; }
            public int Age { get; set; }
            [DisplayName("Category")]      // not DRY
            public SelectList Categories { get; set; }
        }

    Note that the attributes on the ViewModels are not DRY: a lot of information is repeated. Now imagine this scenario multiplied by 10 or 100, and you can see that it quickly becomes tedious and error-prone to keep the ViewModels (and therefore the Views) consistent. How can I "DRY up" this code? Before you answer, "Just put all the attributes on FooModel," I've tried that, but it didn't work, because I need to keep my ViewModels "flat". In other words, I can't just compose each ViewModel with a Model; each ViewModel must have only the properties (and attributes) that should be consumed by the View, and the View can't burrow into sub-properties to get at the values.

    Update: LukLed's answer suggests using inheritance. This definitely reduces the amount of non-DRY code, but it doesn't eliminate it. Note that, in my example above, the DisplayName attribute for the Category property would need to be written twice, because the data type of the property differs between the display and edit ViewModels. This isn't a big deal on a small scale, but as the size and complexity of a project grow (imagine many more properties, more attributes per property, more views per model), there is still the potential for repeating yourself a fair amount. Perhaps I'm taking DRY too far here, but I'd still rather have all my "friendly names", data types, validation rules, etc. typed out only once.
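
    For readers hitting the same wall, a minimal sketch of the inheritance approach mentioned in the update might look like this (the base class name is invented for illustration; the shared attributes move to a base ViewModel and only the property whose type differs is redeclared):

        // Hypothetical base class holding the attributes common to both ViewModels,
        // so each DisplayName/DataType pair is declared once.
        public abstract class FooViewModelBase
        {
            [DisplayName("First Name")]
            public string FirstName { get; set; }

            [DisplayName("Last Name")]
            public string LastName { get; set; }

            [DisplayName("Email Address")]
            [DataType("EmailAddress")]
            public string EmailAddress { get; set; }

            public int Age { get; set; }
        }

        public class FooDisplayViewModel : FooViewModelBase
        {
            [DisplayName("ID Number")]
            public int Id { get; set; }

            [DisplayName("Category")]   // still repeated: the type differs below...
            public string CategoryName { get; set; }
        }

        public class FooEditViewModel : FooViewModelBase
        {
            [DisplayName("Category")]   // ...so this attribute cannot be hoisted
            public SelectList Categories { get; set; }
        }

    As the question notes, Category still carries its DisplayName twice, because its type differs between the two ViewModels; inheritance shrinks the duplication but cannot remove it entirely.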

    Read the article

  • Analytics: Can I set a goal on multiple events?

    - by David Parks
    We have a popup dialogue that requests the user's email address or Facebook login. The page behind the popup loads, so a page view is counted. We want to measure:
    - how many users ignored the popup completely,
    - how many users engaged the popup but didn't complete the process (we trigger an event when the user performs actions defined as "engaging"), and
    - how many users completed the popup.
    Bounce rates aren't telling, because some users won't receive the popup at all. We are basically triggering the events "PopupDisplayed", "PopupEngaged" and "PopupComplete", with labels to differentiate between email and Facebook. But I don't think I can set up a goal that counts "users who triggered both the 'PopupDisplayed' AND 'PopupComplete' events", which is what I'd need to count how many users both saw the popup and completed it.
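
    For reference, a minimal sketch of how such events are typically fired with the classic asynchronous tracker, assuming the standard _gaq queue from ga.js is already on the page (the category and label strings mirror the question's naming and are otherwise illustrative):

        <script type="text/javascript">
          // Each call records one event: category, action, label.
          _gaq.push(['_trackEvent', 'Popup', 'PopupDisplayed', 'email']);
          _gaq.push(['_trackEvent', 'Popup', 'PopupEngaged', 'facebook']);
          _gaq.push(['_trackEvent', 'Popup', 'PopupComplete', 'email']);
        </script>

    A goal can then be defined on any single event; the cross-event "saw AND completed" count the question asks about is exactly the part that per-event goals don't express directly.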

    Read the article

  • Should I use the same AddThis tag on multiple sites?

    - by ripper234
    I have an AddThis tag for one site:

        <script type="text/javascript" src="http://s7.addthis.com/js/250/addthis_widget.js#pubid=ripper234"></script>

    Now, logging into AddThis to get my tag again, I saw it had changed:

        <script type="text/javascript" src="http://s7.addthis.com/js/300/addthis_widget.js#pubid=ripper234"></script>

    Should I keep using the tag I got before, or switch to the new one? What's the difference? Is the 250/300 an internal version number?

    Read the article

  • Ubuntu One Sync for multiple folders, not just the Ubuntu One folder.

    - by bisi
    Hello, I may have misunderstood Ubuntu One as a service, but this is how I had pictured it: at home, I tagged a few folders with the "Sync to Ubuntu One" tick and they started uploading. Now back at work, on Windows 7, I installed Ubuntu One and expected to be able to tick which of the backed-up folders I wanted to download/sync to this machine. From what I gather after a little research, whatever I want synchronized needs to be inside the Ubuntu One folder, and there is no way to do this outside of it? Could someone confirm this, or let me know whether it will be introduced as an option in the future? Thank you very much for your help on this! bisi

    Read the article

  • How should I log time spent on multiple tasks?

    - by xenoterracide
    In Joel's blog on evidence-based scheduling he suggests making estimates based on the smallest unit of work and logging extra work back to the original task. The problem I'm now experiencing: I'll have "create object A", with a subtask "method A", which creates object B, plus tests for all of the above. I create tasks for each of these, which seems to be producing ok-ish estimates (I need practice), but when I go to log work I find that I worked on four tasks at once, because I tweak method A, find a bug in the test, and refactor object B, all while coding. How should I go about logging this work? Should I say I spent, for example, 2 hours on each of the 4 tasks I worked on in an 8-hour day?

    Read the article

  • Rails: Multiple "has_many through" associations for the same two models?

    - by neezer
    Can't wrap my head around this...

        class User < ActiveRecord::Base
          has_many :fantasies, :through => :fantasizings
          has_many :fantasizings, :dependent => :destroy
        end

        class Fantasy < ActiveRecord::Base
          has_many :users, :through => :fantasizings
          has_many :fantasizings, :dependent => :destroy
        end

        class Fantasizing < ActiveRecord::Base
          belongs_to :user
          belongs_to :fantasy
        end

    ...which works fine for my primary relationship, in that a User can have many Fantasies, and a Fantasy can belong to many Users. However, I need to add another relationship for liking (as in, a User "likes" a Fantasy rather than "has" it... think of Facebook and how you can "like" a wall post even though it doesn't "belong" to you; in fact, the Facebook example is almost exactly what I'm aiming for). I gathered that I should make another association, but I'm confused as to how I might use it, or whether this is even the right approach. I started by adding the following:

        class Fantasy < ActiveRecord::Base
          ...
          has_many :users, :through => :approvals
          has_many :approvals, :dependent => :destroy
        end

        class User < ActiveRecord::Base
          ...
          has_many :fantasies, :through => :approvals
          has_many :approvals, :dependent => :destroy
        end

        class Approval < ActiveRecord::Base
          belongs_to :user
          belongs_to :fantasy
        end

    ...but how do I create the association through Approval rather than through Fantasizing? If someone could set me straight on this, I'd be much obliged!
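
    For what it's worth, the usual way out of the name clash sketched above is to give the second association a different name and point it at the same models with :source. A minimal sketch; the names approved_fantasies and approvers are made up for illustration:

        class User < ActiveRecord::Base
          has_many :fantasizings, :dependent => :destroy
          has_many :fantasies, :through => :fantasizings

          has_many :approvals, :dependent => :destroy
          # :source tells Rails which association on Approval to traverse,
          # since the association name no longer matches the model name.
          has_many :approved_fantasies, :through => :approvals, :source => :fantasy
        end

        class Fantasy < ActiveRecord::Base
          has_many :fantasizings, :dependent => :destroy
          has_many :users, :through => :fantasizings

          has_many :approvals, :dependent => :destroy
          has_many :approvers, :through => :approvals, :source => :user
        end

    With that in place, user.fantasies and user.approved_fantasies traverse the two join models independently.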

    Read the article

  • Fraud Detection with the SQL Server Suite Part 2

    - by Dejan Sarka
    This is the second part of the fraud detection whitepaper. You can find the first part in my previous blog post about this topic.

    My Approach to Data Mining Projects

    It is impossible to evaluate the time and money needed for a complete fraud detection infrastructure in advance. Personally, I do not know the customer's data in advance. I don't know whether there is already an existing infrastructure, like a data warehouse, in place, or whether we would need to build one from scratch. Therefore, I always suggest starting with a proof-of-concept (POC) project. A POC takes between 5 and 10 working days and involves personnel from the customer's site – either employees or outsourced consultants. The team should include a subject matter expert (SME) and at least one information technology (IT) expert. The SME must be familiar with both the domain in question and the meaning of the data at hand, while the IT expert should be familiar with the structure of the data and how to access it, and have some programming (preferably Transact-SQL) knowledge. With more than one IT expert, the most time-consuming work, namely data preparation and overview, can be completed sooner. I assume that the relevant data is already extracted and available at the very beginning of the POC project.

    If a customer wants to have their people involved in the project directly and requests the transfer of knowledge, the project begins with training. I strongly advise this approach, as it establishes a common background for all people involved: an understanding of how the algorithms work and of how the results should be interpreted, a way of becoming familiar with the SQL Server suite, and more.

    Once the data has been extracted, the customer's SME (i.e. the analyst) and the IT expert assigned to the project learn how to prepare the data in an efficient manner. Their knowledge and our expertise together allow us to focus immediately on the most interesting attributes and to identify any additional, calculated ones soon after. By employing our programming knowledge, we can, for example, prepare tens of derived variables, detect outliers, identify the relationships between pairs of input variables, and more, in only two or three days, depending on the quantity and quality of the input data.

    I favor the customer's decision to assign additional personnel to the project; I actually prefer to work with two teams simultaneously. I demonstrate and explain the subject matter by applying techniques directly to the data managed by each team, and then both teams continue to work on the data overview and data preparation under our supervision. I explain to the teams what kind of results we expect, the reasons why they are needed, and how to achieve them. Afterwards we review and explain the results, and continue with new instructions, until we resolve all known problems.

    Simultaneously with the data preparation, the data overview is performed. The logic behind this task is the same – again I show the teams the expected results, how to achieve them and what they mean. This is also done in multiple cycles, as is the case with data preparation, because, quite frankly, the two tasks are completely interleaved. One specific objective of the data overview is of principal importance: it is captured in a simple star schema and a simple OLAP cube that will, first of all, simplify data discovery and interpretation of the results, and will also prove useful in the following tasks.

    The presence of the customer's SME is the key to resolving possible issues with the actual meaning of the data. We can always replace the IT part of the team with another database developer; however, we cannot conduct this kind of project without the customer's SME.

    After the data preparation, and once the data overview is available, we begin the scientific part of the project. I assist the team in developing a variety of models and in interpreting the results. The results are presented graphically, in an intuitive way. While it is possible to interpret the results on the fly, a much better alternative is available if the initial training was performed, because it allows the customer's personnel to interpret the results themselves, with only some guidance from me. The models are evaluated immediately, using several different techniques; one of them is evaluation over time, for which we use an OLAP cube. After evaluating the models, we select the most appropriate one to deploy for a production test; this allows the team to understand the deployment process. There are many ways to deploy data mining models into production; at the POC stage, we select the one that can be completed quickly. Typically, this means that we add the mining model as an additional dimension to an existing DW or OLAP cube, or to the OLAP cube developed during the data overview phase. Finally, we spend some time presenting the results of the POC project to the stakeholders and managers.

    Even from a POC, the customer receives lots of benefits, all at the sole risk of spending the money and time for a single 5 to 10 day project:
    - The customer learns the basic patterns of fraud and of fraud detection.
    - The customer learns how to run the entire cycle with their own people, relying on me only for the most complex problems.
    - The customer's analysts learn how to perform much more in-depth analyses than they ever thought possible.
    - The customer's IT experts learn how to perform data extraction and preparation much more efficiently than they did before.
    - All of the attendees of the training learn how to use their own creativity to implement further improvements to the process and procedures, even after the solution has been deployed to production.
    - The POC output for a smaller company, or for a subsidiary of a larger company, can actually be considered a finished, production-ready solution.
    - It is possible to utilize the results of the POC project at the subsidiary level as a finished POC project for the entire enterprise.
    - Typically, the project results in several important "side effects": improved data quality; improved employee job satisfaction, as they are able to proactively contribute to the central knowledge about fraud patterns in the organization; and, because eventually more minds in the enterprise get involved, more and better fraud detection patterns over time.

    After a POC project completed as described above, the actual project does not need months of engagement from my side. This is possible because of our preference to transfer the knowledge to the customer's employees: typically, the customer will use the results of the POC project for some time, and only engage me again to complete the project, or to ask for additional expertise if the complexity of the problem increases significantly. I usually expect to perform the following tasks:
    - Establish the final infrastructure for measuring the efficiency of the deployed models.
    - Deploy the models in additional scenarios:
      - through reports;
      - by including Data Mining Extensions (DMX) queries in OLTP applications, to support real-time early warnings.
    - Include data mining models as dimensions in OLAP cubes, if this was not done already during the POC project.
    - Create smart ETL applications that divert suspicious data for immediate or later inspection.
    - I would also offer to investigate how the outcome could be transferred automatically to the central system; for instance, if the POC project was performed in a subsidiary while a central system is also available.
    - Of course, for the actual project, I would repeat the data and model preparation as needed.

    It is virtually impossible to tell in advance how much time the deployment will take, before we decide together with the customer what exactly the deployment process should cover. Without the deployment part, and with the POC project conducted as suggested above (including the transfer of knowledge), the actual project should still take only an additional 5 to 10 days. The approximate timeline for the POC project is as follows:
    - 1-2 days of training
    - 2-3 days for data preparation and data overview
    - 2 days for creating and evaluating the models
    - 1 day for initial preparation of the continuous learning infrastructure
    - 1 day for presentation of the results and discussion of further actions

    Quite frequently I receive the following question: are we going to find the best possible model during the POC project, or during the actual project? My answer is always quite simple: I do not know. Maybe, if we spent just one hour more on data preparation, or created just one more model, we would get better patterns and predictions. However, we simply must stop somewhere, and the best way to do this, in my experience, is to restrict the time spent on the project in advance, in agreement with the customer. You must also never forget that, because we build the complete learning infrastructure and transfer the knowledge, the customer is capable of doing further investigations independently, improving the models and predictions over time without constant engagement from me.
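
    To make the "real-time early warnings via DMX" scenario above concrete, a singleton prediction query of the kind an OLTP application would issue might look roughly like this. This is a sketch only: the mining model name, column names and input values are all invented for illustration.

        -- Ask a (hypothetical) FraudModel how suspicious one new transaction is.
        SELECT
          Predict([FraudModel].[Is Fraud])               AS PredictedFlag,
          PredictProbability([FraudModel].[Is Fraud], 1) AS FraudProbability
        FROM [FraudModel]
        NATURAL PREDICTION JOIN
        (SELECT 1500       AS [Amount],
                'Internet' AS [Channel],
                3          AS [Transactions Last Hour]) AS t

    The application can then raise an early warning whenever FraudProbability exceeds an agreed threshold.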

    Read the article

  • Is there a pattern or best practice for passing a reference type to multiple classes vs a static class?

    - by Dave
    My .NET application creates HTML files, and as such, the structure looks like this (pseudocode):

        variable myData
        BuildHomePage()
        variable graph = new BuildGraphPage(myData)
        variable table = new BuildTablePage(myData)

    BuildGraphPage and BuildTablePage both require access to the data, the myData object. In the above example, I've passed the myData object to two constructors. This is what I'm doing now, in my current project; the myData object and its properties are all read-only. The problem is that the number of pages requiring this object has grown: in the real project there are currently 4, but the new spec calls for about 20. Passing this object to the constructor of each new object and assigning it to a field is a little time-consuming, but not a hardship! This poses the question of whether it's better practice to continue as I have, or to refactor and create a static class for myData which can be referenced from anywhere in my project. I guess my Google abilities are poor, because I tried to find an appropriate pattern (I'm sure this type of design must be commonplace) but my searches returned nothing. Is there a pattern suited to this, or do best practices favor one implementation over the other?
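
    For reference, a minimal sketch of the two options being weighed, with hypothetical type names (ReportData stands in for myData's class):

        // Hypothetical data type standing in for myData's (read-only) class.
        public class ReportData { /* read-only properties elided */ }

        // Option 1: constructor injection -- what the question already does.
        public class BuildGraphPage
        {
            private readonly ReportData _data;
            public BuildGraphPage(ReportData data) { _data = data; }
        }

        // Option 2: a static holder, set once at startup and read everywhere.
        public static class ReportContext
        {
            public static ReportData Current { get; set; }
        }

        public class BuildTablePage
        {
            public void Render()
            {
                var data = ReportContext.Current; // hidden dependency: harder to
                                                  // test, and impossible to run
                                                  // two data sets side by side
            }
        }

    The usual trade-off: the static version saves the constructor plumbing, but it hides the dependency, which tends to hurt unit testing and rules out processing two different data sets at once.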

    Read the article

  • Should I have multiple sites with similar content on the same server but different IPs?

    - by Eugene
    I have 7 different sites on the same dedicated server. The two major sites have different IPs, and the 5 small ones share the same IP. The first two have similar content, but not the same: they have different URLs, titles and meta data, but they are both in the same niche. I was considering two strategies: move one of the two major sites to a different server, or move the 5 other sites to a different server. I'm not sure which way is better. My questions are: Wouldn't it be better, from an SEO standpoint, to have those 2 sites on different servers? Is that worth the cost of an additional server? And do you know whether Google penalizes sites for similar content on the same server with different IPs?

    Read the article

  • Is there a Content Management System that allows multiple & independent blogs to be running on one domain?

    - by Ron
    Hello webmasters, I am a Wordpress fan, and I'm now building a new site but am not sure which CMS can achieve what I'm trying to do. I am building a food blog network for a bunch of cities in the US, and I want my city pages to be independently running blogs themselves. So basically: a home page that is its own blog with its own users, talking about food in general; a Dallas page (child of the home page) that is its own blog with its own users; a Chicago page; and so on and so forth. The layout and design will be the same everywhere; I'm just trying to achieve 25-50 independent blogs on one domain. How can I achieve this? I'm hoping I don't have to install Wordpress into as many subdomains as I create... Thank you for your help in advance. -RP

    Read the article

  • Why do people have to use multiple versions of jQuery in the same page?

    - by reprogrammer
    I have noticed that sometimes people have to use multiple versions of jQuery in the same page (see question 1 and question 2). I assume people have to carry old versions of jQuery because some pieces of their code are based on an older version of jQuery. Obviously, this approach causes inefficiency. The ideal solution is to refactor the old code to use the newer jQuery API, and I wonder if there are tools that automate the process of upgrading a piece of code to a newer version of jQuery. I've never written programs in either JavaScript or jQuery, so I'd like to hear from programmers experienced in these languages. In particular, I'd like to know the following:
    - How much of a problem is it to have to load multiple versions of jQuery?
    - Have you ever had to load multiple versions of any other library in the same page?
    - Do you know of any refactoring tools that help you migrate your code to the updated API?
    - Do you think such a refactoring tool would be useful? Would you be willing to use it?
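
    For context, the mechanism that makes side-by-side versions possible at all is jQuery.noConflict(). A minimal sketch, with version numbers chosen only as an example:

        <!-- Load the old version first, then the new one. -->
        <script src="jquery-1.3.2.js"></script>
        <script src="jquery-1.4.2.js"></script>
        <script>
          // Passing true releases both the $ and jQuery globals back to the
          // previously loaded version, returning a private reference to the
          // newer one.
          var jq142 = jQuery.noConflict(true);
          jq142(function ($) {
            // code written against the newer API runs in here
          });
        </script>

    Legacy code keeps using the global $ (the old version), while migrated code uses the private reference; the cost is downloading and parsing the library twice.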

    Read the article

  • Business operates in multiple counties, will adding a listing in the Local Business sites harm our placement in SERPs?

    - by leeand00
    I work at a non-profit where we operate in more than two counties within our state. Our offices are located in two different towns, and that leaves a few counties of operation where we would also like to appear in their local SERPs or Local Business listings. Please note that these towns are not necessarily close to all the areas of operation. Since we don't have offices in all the counties of operation, how can we effectively post our business in the Local Business Listings and still show up in our counties of operation?

    Read the article

  • What are some arguments AGAINST using EntityFramework?

    - by Rachel
    The application I am currently building uses stored procedures and hand-crafted class models to represent database objects. Some people have suggested using Entity Framework, and I am considering switching since I am not that far into the project. My problem is that I feel the people arguing for EF are only telling me the good side of things, not the bad side :)

    My main concerns are:
    - We want client-side validation using DataAnnotations, and it sounds like I have to create the client-side models anyway, so I am not sure EF would save that much coding time.
    - We would like to keep the classes as small as possible when going over the network, and I have read that using EF often includes extra data that is not needed.
    - We have a complex database layer which crosses multiple databases, and I am not sure EF can handle this. We have one Common database with things like Users, StatusCodes, Types, etc., and multiple instances of our main database for different instances of the application. SELECT queries can and will query across all instances of the databases, but users can only modify objects in the database they are currently working on, and they can switch databases without reloading the application.
    - Object models are very complex and there are often quite a few joins involved.

    The arguments for EF are:
    - Concurrency: I wouldn't have to code checks to see if the record was updated before each save.
    - Code generation: EF can generate partial class models and POCOs for me. However, I am not positive this would really save much time, since I think we would still need to create the client-side models for validation and some custom parsing methods.
    - Speed of development, since we wouldn't need to create CRUD stored procedures for every database object.

    Our current architecture consists of a WCF service which handles database calls via parameterized stored procedures, POCO objects that go to/from the WCF service and the WPF client, and the WPF client itself, which transforms POCOs into class models for validation and data binding.

    Read the article

  • How hard is it to modify the Django Models?

    - by alex
    I am doing geolocation, and Django does not have a PointField, so I am forced to write raw SQL. GeoDjango, Django's geographic library, does not support the following query for MySQL databases (can someone verify that for me?):

        cursor.execute("SELECT id FROM l_tag WHERE\
            (GLength(LineStringFromWKB(LineString(asbinary(utm),asbinary(PointFromWKB(point(%s, %s)))))) < %s + accuracy + %s)\

    I don't know why GeoDjango can't do this against a MySQL database. I hate writing raw SQL for calculating distances between two points. Is there a way I can create my own library for Django that can handle this? If so, how hard is it?
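
    On the last question: a custom model field is the usual starting point. A minimal sketch, assuming a reasonably recent Django (db_type takes a connection argument from Django 1.2 onward); the class is hypothetical and only covers the column type:

        from django.db import models

        class PointField(models.Field):
            """Hedged sketch of a custom field storing a MySQL POINT column."""

            def db_type(self, connection):
                # Tells Django which column type to emit in CREATE TABLE.
                return 'POINT'

    A model could then declare utm = PointField(), though converting values to and from MySQL's geometry format (e.g. via GeomFromText/AsBinary) still has to be handled, and that is where most of the work lies.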

    Read the article

  • How do I detect multiple sprite collisions when there are >10 sprites?

    - by yao jiang
    I'm making a small program to animate the A* algorithm. If you look at the image, there are lots of yellow cars moving around. Those can collide at any moment; it could be just one collision, or all of them could stupidly crash into each other. How do I detect all of those collisions? How do I find out which specific car has crashed into which other car? I understand that pygame has a collision function, but it only detects one collision at a time and I'd have to specify which sprites. Right now I am just iterating over the sprites to check for collisions:

        for car1 in carlist:
            for car2 in carlist:
                collide(car1, car2)

    This can't be the proper way to do it; if the car list grows to a huge number, a double loop will be too slow.
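
    For reference, a sketch of two common approaches. The first tests each unordered pair exactly once; the second assumes the cars live in a pygame sprite Group (variable names here are made up):

        import itertools
        import pygame

        # Approach 1: each unordered pair exactly once -- no car vs. itself,
        # no (A, B) followed by (B, A) -- using rect overlap as the test.
        def find_collisions(carlist):
            crashed = []
            for car1, car2 in itertools.combinations(carlist, 2):
                if car1.rect.colliderect(car2.rect):
                    crashed.append((car1, car2))
            return crashed

        # Approach 2: let pygame do it. groupcollide returns a dict mapping
        # each sprite in the first group to the sprites it hit in the second;
        # passing the same group twice means every sprite "hits" itself, so
        # that has to be filtered out.
        def find_collisions_group(cars):
            hits = pygame.sprite.groupcollide(cars, cars, False, False)
            return {car: [other for other in others if other is not car]
                    for car, others in hits.items()}

    Both are still pairwise underneath; if the car count gets genuinely large, the usual next step is a spatial partition (a grid or quadtree) so only nearby cars are compared.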

    Read the article

  • Should I have different models and views for no user data than for some user data?

    - by Sam Holder
    I'm just starting to learn ASP.NET MVC and I'm not sure what the right thing to do is. I have a user, and a user has a collection of (0 or more) reminders. I have a controller for the user which gets the reminders for the currently logged-in user from a reminder service, and populates a model holding some information about the user and the collection of reminders. My question is: should I have two different views, one for when there are no reminders and one for when there are some? Or should I have one view which checks the number of reminders and displays different things? Having one view seems wrong, as I end up with code in the view that says if (Model.Reminders.Count == 0) { /* do something */ } else { /* do something else */ }, and having logic in the view feels wrong. But if I want two different views, then I'd like to have code like this in my controller:

        [Authorize]
        public ActionResult Index()
        {
            MembershipUser currentUser = m_membershipService.GetUser();
            IList<Reminder> reminders = m_reminderRepository.GetReminders(currentUser);
            if (reminders.Count == 0)
            {
                var model = new EmptyReminderModel
                {
                    Email = currentUser.Email,
                    UserName = currentUser.UserName
                };
                return View(model);
            }
            else
            {
                var model = new ReminderModel
                {
                    Email = currentUser.Email,
                    UserName = currentUser.UserName,
                    Reminders = reminders
                };
                return View(model);
            }
        }

    ...but obviously this doesn't work as written, because a single strongly-typed view can't accept both model types. So if I'm going to do this, should I return a specific named view from my controller depending on the emptiness of the reminders, or should my Index() method redirect to other actions, like so:

        [Authorize]
        public ActionResult Index()
        {
            MembershipUser currentUser = m_membershipService.GetUser();
            IList<Reminder> reminders = m_reminderRepository.GetReminders(currentUser);
            if (reminders.Count == 0)
            {
                return RedirectToAction("ShowEmptyReminders");
            }
            else
            {
                return RedirectToAction("ShowReminders");
            }
        }

    ...which seems nicer, but then I need to re-query the reminders for the current user in the ShowReminders action. Or should I be doing something else entirely?
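
    For completeness, the named-view variant contemplated above avoids both the redirect and the re-query. A sketch; the view names are hypothetical:

        [Authorize]
        public ActionResult Index()
        {
            MembershipUser currentUser = m_membershipService.GetUser();
            IList<Reminder> reminders = m_reminderRepository.GetReminders(currentUser);

            if (reminders.Count == 0)
            {
                // View(string, object) renders the named view with this model,
                // so each view stays strongly typed to its own model class.
                return View("EmptyReminders",
                            new EmptyReminderModel
                            {
                                Email = currentUser.Email,
                                UserName = currentUser.UserName
                            });
            }

            return View("Reminders",
                        new ReminderModel
                        {
                            Email = currentUser.Email,
                            UserName = currentUser.UserName,
                            Reminders = reminders
                        });
        }

    The reminders are fetched once, the branch lives in the controller rather than the view, and each view keeps a single model type.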

    Read the article

  • How can I screen clients that try to register multiple times?

    - by Aba Dov
    My company offers a bonus to every client who registers, and we would like to prevent people from abusing this by registering several times. We have thought about filtering clients by:
    - IP address (there is a problem with workplaces, where all stations share the same IP)
    - cookies (if cookies are not allowed, we might lose a client)
    I would like your opinions on these two methods, and I'd be glad to hear about others. Thanks.

    Read the article

  • How do I link external style sheet to multiple pages and folders?

    - by user18681
    I'm building a pretty large website that will have many pages and folders, and I have one stylesheet. How do I make that stylesheet available to all of these folders? I didn't have this problem before I started putting the pages in separate folders; now that each page has its own folder, it no longer finds my stylesheet unless a copy is in the same folder. For example, say I have a folder for pets, another for cars, and another for planes. Currently I have to put my stylesheet in each and every one of these folders for the pages to display correctly. How can I arrange things so that I do not need a copy of the stylesheet in every folder? In other words, how can my pages reference a stylesheet that lives in a different folder?
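
    The standard fix is to keep one copy of the stylesheet and reference it with a root-relative path (one beginning with /), which resolves the same way from any folder. A sketch, assuming the stylesheet lives at /css/style.css:

        <!-- works unchanged from /pets/index.html, /cars/index.html, etc. -->
        <link rel="stylesheet" type="text/css" href="/css/style.css">

        <!-- by contrast, a bare relative path breaks once pages move into
             subfolders; from /pets/index.html this looks for /pets/style.css -->
        <link rel="stylesheet" type="text/css" href="style.css">

        <!-- an alternative from inside a subfolder: climb one level explicitly -->
        <link rel="stylesheet" type="text/css" href="../css/style.css">

    Root-relative paths do require the site to be served by a web server (or previewed with one), since opening files directly from disk resolves / differently.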

    Read the article

  • How to get multiple open-source projects to use a standard way of doing something.

    - by Marco
    Problem: In the last couple of weeks, I've used three different "repository" tools (listed in alphabetical order): gradle, ivy, and maven. I'm calling them "repository" tools because I've also used sbt, which fortunately uses ivy to manage its cache, or local repository. Each of these tools creates its own repository, with these defaults:
    - ~/.m2/repository for maven
    - ~/.gradle/cache for gradle
    - ~/.ivy2/cache for ivy
    Why can't they all use the same cache?

    Goal: I'd like to change the world so that all three build tools could use the same cache. I'm looking for advice about issues I'm likely to run into and smart ways to get around them. By "use the same cache", I do not mean "retrieve from another build tool's cache"; I mean "retrieve from and store in another build tool's cache". While I could go ahead and submit issues to the three projects, I know from experience (as a developer on an open-source project) that if you want something done, you're best off doing it yourself. Also, it seems like I need to get all three communities on board to some degree. What is the recommended approach for getting this kind of thing done? How do I approach the different communities? Do I work on patches for the three different projects, or would it be better to create my own "interface" project that deals with these issues and have the three tools interface with that? Is this a standards question that I need to address on that front? Lastly, if I'm missing something and this is already possible (in a globally configurable fashion), then please let me know.
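
    For the record, each tool's cache location is at least individually configurable today, even though the on-disk layouts are incompatible (so pointing all three at one directory would not make them interoperate). A sketch of the knobs, with /shared/repo as an example path:

        # Maven: ~/.m2/settings.xml
        #   <settings>
        #     <localRepository>/shared/repo/maven</localRepository>
        #   </settings>

        # Ivy (and therefore sbt): ivysettings.xml
        #   <ivysettings>
        #     <caches defaultCacheDir="/shared/repo/ivy"/>
        #   </ivysettings>

        # Gradle: relocate its whole user home, cache included
        export GRADLE_USER_HOME=/shared/repo/gradle

    The hard part the question identifies is therefore not the location but the format: a shared cache would need all three tools to agree on one layout and on concurrent-access semantics.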

    Read the article

  • How do you manage frequent software releases to multiple clients?

    - by meeech
    Hi, we have a cross-platform middleware product which we typically end up customizing/bug-fixing on a per-client basis, in some cases providing updates as often as once or twice per week. We have a lot of trouble efficiently managing and releasing the updates to our clients. I've done some digging, but I can't find anything that specifically addresses this problem. Can anyone share their experiences: how do you deal with this scenario? Or do you know of a good software-delivery CMS? Thanks.

    Read the article

  • How do I create an encrypted system with multiple Linux distributions?

    - by niels
    A few weeks ago I created a completely encrypted system on a notebook, and I must say I like the idea. It's a little annoying to enter the password on every boot, but it's nice to know that even if I lose the computer, I don't hand my data to other people. With the alternate CD it's easy to do. Now I have to set up a new system where I want to combine the new idea with my usual strategy, which uses more partitions: 3 system partitions, a home partition, and separate data partitions for vm data, photo data and mp3 data. The background is that I prefer not to upgrade a system in place; I prefer to install the new version parallel to the old system, so I can easily test it. Obviously, the data partitions are used by both systems. My question is: how can I easily combine my strategy with the crypto approach, or is it impossible? Doing the crypto setup by hand is, in my eyes, too complicated.
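
    One common way to get there, offered as a sketch only (device names and sizes are examples): put a single LUKS container on the disk and carve it up with LVM, one logical volume per distribution root plus the shared data volumes, so one passphrase unlocks everything and each installer is pointed at its own volume:

        # Encrypt one big partition and open it (run from a live CD).
        cryptsetup luksFormat /dev/sda2
        cryptsetup luksOpen /dev/sda2 cryptpool   # "cryptpool" is an arbitrary name

        # Build LVM inside the encrypted container.
        pvcreate /dev/mapper/cryptpool
        vgcreate vg0 /dev/mapper/cryptpool
        lvcreate -L 20G  -n system1 vg0   # root for distro #1
        lvcreate -L 20G  -n system2 vg0   # root for distro #2, installed in parallel
        lvcreate -L 50G  -n home    vg0   # shared between both systems
        lvcreate -L 100G -n media   vg0   # photos/mp3/vm images, also shared

        # Each installer must support installing onto existing LUKS+LVM (the
        # alternate installer does), and each installed system needs the LUKS
        # device in its /etc/crypttab so its initramfs can unlock it at boot.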

    Read the article

  • Multiple 301 redirects, do search engines/viewers see them all?

    - by Karim
    I've put in place lots of different 301 rules to deal with numerous URL changes, and for certain URLs there are 3-4 different 301 redirects landing visitors on the new URL. All of the 301s are on-site, for the same domain, with a mix of PHP 301s and .htaccess 301s. I've heard that a 301 loses some PageRank/link juice. For instance:

        articles/news.php?id=2  ->  articles/blog.php?id=2  [filename change]
        articles/*              ->  /*                      [subdirectory to root]
        /blog.php?id=2          ->  /title-of-post          [mod_rewrite URL change]

    So if you were to visit /articles/news.php?id=2, there would be a chain of 301 redirects until you land on the final URL (/yellow-wellington-boots/, say). My question is: does Google see the intermediate redirects, or just the final page the 301s redirect to?
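
    Whatever Google sees, the usual mitigation is to short-circuit the chain so old URLs 301 directly to their final destination in one hop. A hedged mod_rewrite sketch (rule order matters, and the id-to-slug mapping here is hardcoded purely for illustration; in practice it would come from the PHP side):

        # .htaccess sketch: send the oldest URL form straight to the newest,
        # instead of letting it walk through each historical rewrite.
        RewriteEngine On

        # Hypothetical direct mapping for one known article id; the trailing "?"
        # drops the old query string from the target URL.
        RewriteCond %{QUERY_STRING} ^id=2$
        RewriteRule ^articles/news\.php$ /title-of-post? [R=301,L]

    Placing such direct rules before the generic historical ones ensures each legacy URL is redirected exactly once.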

    Read the article

  • How can I keep my privacy while owning multiple domain names?

    - by Abby
    I want to own, create and run a number of domains, but I do not want whois lookups to show my name, home address and home phone to anyone who looks me up. I've already bought a mailbox that I can use as my physical address, but that doesn't take care of the name and phone number. What is the best way to stay anonymous while remaining legal? Do I need to incorporate all my sites under an LLC? Can I create a company name without forming an LLC? And then there's the phone number... Thanks in advance to all who respond!

    Read the article
