Search Results

Search found 59196 results on 2368 pages for 'time wastrel'.


  • setting up freedns with an existing domain

    - by romeovs
    I've been running a webserver off of a PC at a static IP successfully for the past 5 months. Recently, however, I've moved into another apartment and my ISP only provides a dynamic IP (my IP changes from time to time). I'm not an internet genius, but I was thinking to fix this by using a dynamic DNS provider. So I got on the web and found freedns. I'm a bit confused about how to set everything up, though. I've managed to successfully install the IP updater daemon on my web server. Then, in my registrar's control panel, I set the NS records to point at ns1 through ns4.afraid.org (removing the old NS records). I'm not certain what I should do with the A records, though (for now they are still pointing to the old static IP address). I have A records for www, blog, irc, etc., but I cannot point them at my new IP address, because it isn't static. Could someone explain this in the clearest possible way (perhaps elaborating on what happens at each step of the DNS process)? I never really knew what the A records are for anyway. (Note that I haven't really found any documentation at the freedns website, or on Google.)
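    A quick grounding for the question above: an A record simply maps a hostname (www, blog, irc, …) to an IPv4 address, which is exactly the piece that keeps changing on a dynamic connection, so a dynamic DNS setup has the updater daemon rewrite those records whenever the public IP moves. The sketch below shows the general shape of such an updater; the update URL and token are hypothetical placeholders (freedns issues a per-account update URL), so treat it as an illustration rather than a working client:

    ```python
    import time
    import urllib.request

    # Hypothetical values: substitute whatever update URL/token your DDNS provider issues.
    CHECK_IP_URL = "https://ifconfig.me/ip"                               # any "what is my IP" service
    UPDATE_URL = "https://ddns.example.invalid/update?token=YOUR_TOKEN"   # placeholder endpoint

    def current_public_ip() -> str:
        """Ask an external service which public address our traffic appears to come from."""
        with urllib.request.urlopen(CHECK_IP_URL, timeout=10) as resp:
            return resp.read().decode().strip()

    def main() -> None:
        last_reported = None
        while True:
            try:
                ip = current_public_ip()
                if ip != last_reported:
                    # Tell the DDNS provider to repoint the hostname's A record at the new address.
                    urllib.request.urlopen(UPDATE_URL, timeout=10).close()
                    last_reported = ip
            except OSError:
                pass            # transient network trouble: just try again next round
            time.sleep(300)     # re-check every 5 minutes

    if __name__ == "__main__":
        main()
    ```

    With the NS records pointing at the dynamic DNS provider, the provider answers A-record queries with whatever address the daemon last reported.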

    Read the article

  • How do you maintain focus when a particular aspect of programming takes 10+ seconds to complete?

    - by Jer
    I have a very difficult time focusing on what I'm doing (programming-wise) when something (compilation, startup time, etc.) takes more than just a few seconds. Anecdotally, it seems that threshold is about 10 seconds (and I recall reading about a study that said the same thing, though I can't find it now). So what typically happens is I make a change and then run the program to test it. That takes about 30 seconds, so I start reading something else, and before I know it 20 minutes have passed, and then it takes (if I'm lucky!) another 10+ minutes to deal with the context switch back into programming. It's not an exaggeration to say that some things that should take me minutes literally take hours to complete. I'm very curious about what other programmers do to combat this tendency (or am I unique and they don't have this tendency?). Suggestions of any type at all are welcome - anything from "sit on your hands after hitting the compile button", to mental tricks, to "if it takes 30 seconds to start up something to test a change, then something's wrong with your development process!"

    Read the article

  • How can I make permanent death in a MUD seem acceptable and fair to players?

    - by Luke Laupheimer
    I have considered writing a MUD for years, and I have a lot of ideas my friends think are really cool (and that's how I'd hope to get anywhere -- word of mouth). Thing is, there's one thing I have always wanted that my friends and strangers have hated: permanent death. Now, the emotional response I get to this is visceral revulsion, every time. I'm pretty sure I am the only person that wants this, or if I'm not, I'm in a tiny minority. Now, the reason I want it is because I want the actions of the players to matter. Unlike a lot of other MUDs, which have a set of static city-states and social institutions etc., I want the things my players do, should I get any, to actually change the situation. And that includes killing people. If you kill someone, you didn't send them to time out, you killed them. What happens when you kill people? They go away. They don't come back in half an hour to smack-talk you some more. They're gone. Forever. By making death non-permanent, you make death not matter. It would be like the climax of a character's arc being a speeding ticket. It cheapens it. Non-permanent death cheapens death. How can I: 1) convince my players (and random people!) that this is actually a good idea, or 2) find some other way to make death and violence matter as much as it does in real life (except within the game, of course) sans character deletion? What alternatives are out there?

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

    Simpler Code

    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed

    Before diving into the code, note that the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

        // With the SDK
        public class MyData1 : TableServiceEntity
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

        // With the Enzo Azure API
        public class MyData2 : BaseAzureTable
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

    Now that the classes representing an Azure Table entity are defined, let's review what the code looks like with the Azure SDK when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

        // With the Azure SDK
        public List<MyData1> FetchAllEntities()
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            TableServiceContext serviceContext = tableClient.GetDataServiceContext();
            CloudTableQuery<MyData1> partitionQuery =
                (from e in serviceContext.CreateQuery<MyData1>(_tableName)
                 select new MyData1()
                 {
                     PartitionKey = e.PartitionKey,
                     RowKey = e.RowKey,
                     Timestamp = e.Timestamp,
                     Message = e.Message,
                     Level = e.Level,
                     Severity = e.Severity
                 }).AsTableServiceQuery<MyData1>();
            return partitionQuery.ToList();
        }

    This code gives you automatic retries, because AsTableServiceQuery does that for you. Also, note that this method is strongly typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly typed object manually, so for larger entities, with dozens of properties, your code will grow. And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all of them) to reduce network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp).

    The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

        // With the Enzo Azure API
        public List<MyData2> FetchAllEntities()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
            return res;
        }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies

    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters ([‘a’, ‘b’[, [‘b’, ‘c’[, [‘c’, ‘d’[, …), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):

        public List<MyData2> FetchAllEntitiesGUID()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
            return res;
        }

    Faster Results With Sequential Fetch Methods

    Developing a faster API wasn't a primary objective, but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different, so it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless, when fetching data it seems that the Enzo Azure API delivers faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each): the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (a 39% improvement).

    With Fetch Strategies

    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless, I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each), with the execution time averaged over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that this test quickly hit a limit on my network bandwidth (3.56Mbps), so the results of the fetch strategy are significantly below what they could be with higher bandwidth.

    Additional Methods

    The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:

    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries…)

    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including MCDBA, MCSE and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • How should I architect a personal schedule manager that runs 24/7?

    - by Crawford Comeaux
    I've developed an ADHD management system for myself that's attempting to change multiple habits at once. I know this is counter to conventional wisdom, but I've tried the conventional approach for years and am now trying it my way. (Just wanted to say that to try and prevent it from distracting people from the actual question.) Anyway, I'd like to write something to run on a remote server that monitors me, helps me build/avoid certain habits, etc. What this amounts to is a system that: runs 24/7; may have multiple independent tasks to run at once; may have tasks that require other tasks to run first; and lets tasks be scheduled by specific time, recurrence (i.e. "run every 5 mins"), or interval (i.e. "run from 2pm to 3pm"). My first naive attempt at this was just a single PHP script scheduled to run every minute by cron (the language was chosen in order to use a certain library, but is no longer necessary). The logic behind when to run this or that portion of code got hairy pretty quickly. So my question is how I should approach this from here. I'm not tied to any one language, though I'm partial to Python/JavaScript. Thoughts: it could be done as a set of scripts that include a scheduling mechanism, with one script per bit of logic... but the idea just feels wrong to me. Building it as a daemon could be helpful, but I'm still unsure what to do about dozens of if-else statements for detecting the current time.
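    One way to avoid the pile of if-else time checks is to keep the schedule as data and let a single daemon loop ask each task whether it is due. Here is a minimal sketch of that idea in Python (the task definitions at the bottom are invented examples, not anything from the question):

    ```python
    import time
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Callable, List

    @dataclass
    class Task:
        name: str
        action: Callable[[], None]
        every_seconds: int                      # recurrence interval
        active_hours: range = range(0, 24)      # e.g. range(14, 15) = only 2pm-3pm
        depends_on: List[str] = field(default_factory=list)
        last_run: float = 0.0

        def due(self, now: float, finished: set) -> bool:
            in_window = datetime.fromtimestamp(now).hour in self.active_hours
            deps_met = all(d in finished for d in self.depends_on)
            return in_window and deps_met and now - self.last_run >= self.every_seconds

    def run_forever(tasks: List[Task]) -> None:
        finished = set()
        while True:                             # the 24/7 part: a daemon loop
            now = time.time()
            for task in tasks:
                if task.due(now, finished):
                    task.action()
                    task.last_run = now
                    finished.add(task.name)
            time.sleep(30)                      # resolution of the scheduler

    # Hypothetical example tasks
    tasks = [
        Task("ping_me", lambda: print("check in"), every_seconds=300),
        Task("evening_review", lambda: print("review"), every_seconds=3600,
             active_hours=range(14, 15), depends_on=["ping_me"]),
    ]

    if __name__ == "__main__":
        run_forever(tasks)
    ```

    Recurrence, time windows, and simple dependencies all become fields on the task rather than branches in the loop, which is the main point of the sketch.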

    Read the article

  • Oracle Customer Experience (CX) Solutions Make Retailers Merry

    - by Tuula Fai
    'Tis the season to be jolly. If you're a retailer, your level of jolliness depends on sales. So you watch trends like U.S. store traffic increasing 3.5% to 308 million on Black Friday but sales actually falling 1.8% to $11.2 billion. Fortunately, by the end of November, retail sales were up 3.7% over the previous year, thanks to life recovering after Hurricane Sandy. And online sales topped $1 billion for the first time ever! Who are the companies improving their sales online? They are big names like Walgreen's Drugstore.com, Nordstrom's HauteLook, and Intuit. More importantly, how are they doing it? They use cutting-edge business practices enabled by Oracle's CX Cloud Service & Support solutions to: increase conversion rates and order sizes (Customer Acquisition); enhance customer satisfaction and loyalty (Customer Retention); and reduce contact center costs and improve agent productivity (Operational Efficiency). Acquisition + Retention + Operational Efficiency = Sustainable Growth and Profits. That's the magic formula for retail customer service success. Don't take our word for it. Look at the results of these Oracle customers: Walgreen's Drugstore.com, 30% sales conversion rate on chat sessions with a 20% increase in shopping cart size; Nordstrom's HauteLook, 40,000+ interactions per month (20% growth over last year) efficiently managed by 40 agents, with no increase in IT costs; Intuit, 50% increase in customer satisfaction and 70% decrease in cost per interaction. Using Oracle's CX Cloud & Service solutions, these retailers deliver consistent, relevant, and personalized experiences across all touchpoints, including social, mobile, and web. Their ability to connect with customers anytime, anywhere, providing the right answer at the right time, helps them create a defensible advantage in the marketplace. Want to learn more? Please visit http://www.oracle.com/goto/cloudlaunchpad for free resources on delivering exceptional customer service in the Cloud. Also, watch our YouTube channel to learn more about seamless multichannel retail and Winston Furnishings' exceptional customer experience.

    Read the article

  • Dual Monitor 'How To' for 12.04

    - by Kim Prince
    I recently built my own PC and was delighted with the result, except for a problem with dual monitors. After having tried a few different combinations of hardware, I think what I really need is a 'how to' explanation. My motherboard is an MSI Z77MA-G45, which has an analogue, a DVI, and an HDMI port. Initially I hooked monitors up to the DVI and analogue ports and it seemed to work fine. Both screens worked independently of each other; it was great. After a few days I started turning my PC off at night, and when I tried to turn it back on it would boot into terminal mode. I would have to turn one of the monitors off and, after rebooting a few times, it would eventually boot into an X Window session. Occasionally I would see an error relating to Xorg. I upgraded the motherboard BIOS but that made no difference. Eventually, I installed a graphics card - an NVIDIA GeForce GT 520. Now it seems that my onboard graphics have been disabled completely, and I am reliant on the graphics card. Furthermore, the graphics card seems to only recognise one screen at a time. (The first time I rebooted with both plugged in, it flashed up a message saying that it was auto-selecting DVI.) Anyhow, I think I need some 'how to' (or perhaps 'where to') from here. For example, is X Windows configuration the next place to look? And how do I go about configuring X Windows? (Note that in System Settings it says my graphics driver is 'unknown', and when I ask it to detect monitors, it sees only one!)

    Read the article

  • How to identify process that's sending error messages to terminal?

    - by kjo
    The following error message occasionally appears in my terminal: Failed to open VDPAU backend libvdpau_nvidia.so: cannot open shared object file: No such file or directory ...which is pretty annoying. I have searched online for solutions to this error, without success. Is there any way to at least identify the process responsible for sending these error messages to my terminal? EDIT: Let me clarify that, as far as I can tell, these error messages appear "out of the blue". In fact, they appear asynchronously with respect to my interactions with the terminal (more often than not I see them for the first time when I return to a terminal window that has been unattended for some time). I'm sure there's a definite, deterministic cause for these messages, but it is not one that I can readily identify. In short, I have not noticed any pattern or regularity to their occurrence. In particular, in my case their occurrence has nothing to do with running mplayer, or any other video playback program. (Please see my earlier post about it here.) For one thing, the machine in question is a work machine, and I rarely watch any videos with it. In the very few instances in which I've watched a video on this machine I've used VLC, not mplayer, and these errors never appeared on the rare occasions that I used VLC.
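    There is no single command that answers "which process printed that?", but on Linux you can at least enumerate the processes that currently have this terminal open as stdout or stderr, which narrows the suspects considerably (VDPAU messages typically come from something attempting hardware-accelerated video). A rough sketch, Linux-only since it reads /proc; run it inside the affected terminal:

    ```python
    import os
    import glob

    def processes_attached_to(tty_path: str):
        """List PIDs whose stdout or stderr point at the given tty (Linux /proc only)."""
        owners = []
        for fd_link in glob.glob("/proc/[0-9]*/fd/[12]"):   # fd 1 = stdout, fd 2 = stderr
            try:
                if os.readlink(fd_link) == tty_path:
                    owners.append(int(fd_link.split("/")[2]))
            except OSError:
                continue        # process exited, or not ours (run with sudo to see everything)
        return sorted(set(owners))

    if __name__ == "__main__":
        tty = os.ttyname(0)     # the terminal this script was started from, e.g. /dev/pts/3
        for pid in processes_attached_to(tty):
            with open(f"/proc/{pid}/comm") as f:
                print(pid, f.read().strip())
    ```

    Anything in that list (other than the shell itself) is a candidate for the process writing the VDPAU warning.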

    Read the article

  • SharePoint Saturday Huntsville Wrap Up

    - by Mark Rackley
    So, Cathy Dew (@catpaint1) and company put on a great SharePoint Saturday event this past weekend. I got to hang out with some old friends and meet some new ones. I’d list you all, but I’d undoubtedly miss someone and don’t want to offend anyone.  Although I find it odd that I see @MossLover now more since she moved to New Jersey than when she lived next door in Kansas City… what’s up with that? Anyway, Cathy did a tremendous job organizing the event.  Everything went smoothly and everyone had a great time. Maybe I can talk her into organizing the rest of SharePoint Saturday Ozarks on June 12th… you know that’s coming up? right? While you’re here why not go ahead and register right now at: http://spsozarks.eventbrite.com/  Yes.. that was a shameless plug… I did my default presentation on “Wrapping Your Head Around the SharePoint Beast”. This continues to be my most popular presentation. I try to tweak it every time and I always have fun doing it. I get to pick on people and they pick on me back, but I always manage to learn something new when I present it. I had a great interactive crowd and they didn’t throw anything at me.  All in all I consider it a success.  Thanks for coming if you attended!  You can get the slides here:  SharePoint Saturday Huntsville - Wrapping Your Head Around the SharePoint Beast Next up for me is SharePoint Saturday DC on May 15th.  Wow this is going to be a huge event with space for 1500 attendees.. no, that is not a typo!  Stop me and say hi if you are able to make it!!

    Read the article

  • suggestions for lonewolf dev setup

    - by d33j
    I'm looking for some suggestions for a better development setup. Background: I'm a crusty old software engineer (mostly Java of late) and I have around 50-100 incomplete Java projects scattered everywhere (USB keys, HDDs, and across 5 or 6 computers), which have been put on hold for a few years (i.e. family). I have no version control at home. I've been using IntelliJ for around 10 years, so that's the only constant. I'm thinking of nominating one machine as a headless server to put all my projects on, maybe an Ubuntu box; that way it won't matter which device I'm on, all my projects can be accessed (and I don't have to waste time actually looking for them). I don't need to access code over the net. These are my own 'happy place' projects, so I only work on them when I'm at home. However, I can see the benefit of the tasking app being online; that way, if I think of something while on public transport, let's say, I can add it then and there, but it's not a requirement. I can wait until I get home to create tasks. Summary: I need some sort of version control so I can roll back mistakes, and some sort of simple tasking software where I can assign tasks to myself for later when I get time. I use Subversion, Sonar, Jira and Crucible at work, but I think that's a little bit of overkill for me. What do you suggest?

    Read the article

  • Why would one bother marking up properly and semantically?

    - by Madara Uchiha
    Note that I (try to) mark up as semantically as possible because I like the way it looks and feels, but not because I'm aware of any other stunning advantages. The point of my question is to be able to educate others. Well, I've seen a lot of articles and tutorials which often state "Let's mark this up in the most semantic way possible". But a strange thought came to me: why? Why would one need (or want) to bother with the specific elements which convey the correct semantic meaning? Specifically, I'm referring to the new HTML5 elements, such as <time>, <output>, or <address>. Especially if the page "works" (it renders nicely in all browsers), why would I want to use elements like <time> or <address>, where nothing at all (or, at worst, a generic <span>) works just as nicely? I'm asking this because I'm seeing a multitude of (very popular) websites (this one included) which do not follow these so-called best practices.

    Read the article

  • Optimising website IP for location

    - by Liam Sorsby
    From my understanding of SEO, websites are optimised for the current location of their IP address. For example, if xxx.xxx.xxx.xx resolves to the UK then you are more likely to get higher rankings in the UK than you are in the USA. However, my query is this: when you use a CDN you are storing a cached version of your website across multiple servers at strategic locations across the globe to reduce load time in locations that you're trying to target. Now if you use a CDN and geo-locate the website URL, it only resolves back to the USA (where our IP address resolves to), and it doesn't resolve to any other countries. As far as I know you can have multiple IP addresses resolving to one domain (from different countries). Do CDNs really help to optimise the location of your website, or are they solely meant to optimise load time? Is there a better way to optimise for multiple countries with regards to the resolution of the IP address? Are VPNs, as per this post here, relevant to this? Any advice would be helpful.

    Read the article

  • How to stay creative when going through tough emotional times (divorce, family death, etc)? [closed]

    - by gaearon
    Hi everyone. I believe this is not a duplicate of the motivation question because I want to especially emphasize the emotional breakdown. You may conquer lack of motivation by working harder and getting through the dip; however, this was not the case when I was separating from my girlfriend. I actually liked the project; it was (and it still is!) my first programming job at an amazing workplace and I wasn't being pressured in any way, but I found myself absolutely unable to code, blankly staring at the screen, my thoughts disorganized, a feeling of emptiness in my chest. I could perform some straightforward coding, but anything that involved creative thinking, designing abstractions, solving new problems and, worst of all, fixing bugs in legacy code completely wiped out my brain, to the point that I started avoiding work, which I had never done before. Coffee only made it worse. Eventually I got over it, and I remember the happy day I solved a problem elegantly and thought: hell, first time in a month! Thankfully the project wasn't top priority and I had the time to catch up. I wonder now, was there any other way to boost my productivity back then? I bet people would say I should've taken a break (and I think I really should have), but what if I needed the money? Didn't want to lose my job? Are there any ways to trick your brain into being creative despite emotional losses? From your experience, would it be worth talking to my boss or colleagues?

    Read the article

  • NINTENDO, EDCON and ALLEGIS GROUP @ Oracle Open World 2012 Conference Session (CON9418): The Business Case for Oracle Exalogic: A Customer Perspective

    - by Sanjeev Sharma
    Are you looking to deliver breakthrough performance for packaged and custom applications? For many front-office applications such as Oracle WebCenter Sites, Oracle Transportation Management, and Oracle's ATG and Siebel product families, improved performance leads directly to greater revenue or cost savings from the business - a compelling proposition. For back-office applications, improved performance has tangible benefits in terms of footprint reductions. For all applications, Oracle Exalogic and Oracle Exadata provide an engineered solution that delivers shorter time to value and lower operational costs. Edcon is a leading clothing, footwear and textiles (CFT) retailing group in southern Africa trading through a range of retail formats. The company has grown from opening its first store in 1929 to ten retail brands trading in over 1000 stores in South Africa, Botswana, Namibia, Swaziland and Lesotho. Edcon's retail business has, through recent acquisitions, added top stationery and houseware brands as well as general merchandise to its CFT portfolio. Edcon was looking to consolidate its existing middleware components (WebLogic and Oracle SOA) and retail applications (Retek, Siebel and E-Business Suite) on a common platform and turned to Oracle Exalogic. With Oracle Exalogic, Edcon is able to derive significant HW CAPEX savings, improve the response time of core business applications and mitigate operating risk. Hear senior business leaders from Nintendo, Edcon and Allegis Group discuss the business value of leveraging Oracle Exalogic at the following conference session at Oracle Open World 2012: Session: CON9418 - The Business Case for Oracle Exalogic: A Customer Perspective; Date: Monday, 1 Oct, 2012; Time: 1:45 pm - 2:45 pm (PST); Venue: Moscone South (306)

    Read the article

  • share distribution question

    - by facebook-100000781341887
    Hi, I just developed a Facebook game (mafia-like), but the graphics I made are not good, because they are referenced from existing photos, traced with AI, and colored. Therefore, I invited a friend to join me; he is a graphic designer and owns a company with his friend (I know both of them). For the shares, I expected at least 70% for me and at most 30% for them (both of them want to join). They gave me a counter offer: 60% for me and 40% for them. Of course, I feel their counter offer is unacceptable, because they would only build the images part time, while all the other work (coding, web hosting, etc.) is what I do full time. Their argument for being worth 40% is that they will make good graphics and can provide an advertising channel (in a local magazine), etc. Actually, I don't think the game needs advertisement in a local magazine, because the game is not targeted locally... Please give me some comments on this issue (is the share split fair? how important are the game's graphics; are they worth more than 30%?), or can anyone share their experience with this? Thanks in advance.

    Read the article

  • Function calls to calls in windows api

    - by Apeee
    I am a beginner learning C, and I find it hard to grasp the whole programming concept, so hopefully this will help clear up some things along the way. When programming on Windows, which is my aim for the time being, it is really hard for me to understand how Windows communicates with the programs that run on it. A question I have been pondering is this: when you call a function that lives in another location on disk or in memory (not a function you wrote yourself and included in the compilation), especially a Windows API function, how does the compiler know where that function is located, so that the program can call it when it is run? For example, consider a very simple program that displays a window which reads "hello world". You would have to call Windows API functions to achieve such features as creating the window, setting its size, colors and so on... So basically what I am struggling to grasp is how the programs I write communicate with the platform/framework they run on (generally Windows for the Windows API). Apart from clarification on the point above, I would love a resource that explains this concept further. Thanks for your time!
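    The short answer to the question above is that the compiler never knows the address: it only records which DLL exports the function (user32.dll, kernel32.dll, and so on) in the executable's import table, and the Windows loader resolves the real addresses when the program starts. The same lookup can be made visible from Python with ctypes, where loading the DLL and finding the exported symbol happen explicitly at run time (a small, Windows-only sketch):

    ```python
    # Windows-only sketch: load a system DLL by name and call one of its exported functions.
    import ctypes

    user32 = ctypes.WinDLL("user32")          # ask the loader for user32.dll
    MessageBoxW = user32.MessageBoxW          # look up the exported symbol by name

    # HWND hWnd = NULL, LPCWSTR text, LPCWSTR caption, UINT type = MB_OK (0)
    MessageBoxW(None, "Hello from the Windows API", "Demo", 0)
    ```

    A C compiler and linker do the equivalent of those two lookup steps for you, via the import library at link time and the loader at program start.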

    Read the article

  • Develop in trunk and then branch off, or in release branch and then merge back?

    - by Torben Gundtofte-Bruun
    Say that we've decided on following a "release-based" branching strategy, so we'll have a branch for each release, and we can add maintenance updates as sub-branches from those. Does it matter whether we: develop and stabilize a new release in the trunk and then "save" that state in a new release branch; or first create that release branch and only merge into the trunk when the branch is stable? I find the former to be easier to deal with (less merging necessary), especially when we don't develop on multiple upcoming releases at the same time. Under normal circumstances we would all be working on the trunk, and only work on released branches if there are bugs to fix. What is the trunk actually used for in the latter approach? It seems to be almost obsolete, because I could create a future release branch based on the most recent released branch rather than from the trunk. Details based on comment below: Our product consists of a base platform and a number of modules on top; each is developed and even distributed separately from each other. Most team members work on several of these areas, so there's partial overlap between people. We generally work only on 1 future release and not at all on existing releases. One or two might work on a bugfix for an existing release for short periods of time. Our work isn't compiled and it's a mix of Unix shell scripts, XML configuration files, SQL packages, and more -- so there's no way to have push-button builds that can be tested. That's done manually, which is a bit laborious. A release cycle is typically half a year or more for the base platform; often 1 month for the modules.

    Read the article

  • 'Buy the app' landing page implementations: redirect or javascript popup?

    - by benwad
    My site (using Django) has an app that I'm trying to push. I currently have a piece of middleware that redirects the user to a page advertising the app if they're accessing the site on an iPhone, then sets a cookie so that the user isn't bugged by the message every time they visit the site. This works fine; however, checking the page with the mobile Googlebot checker shows that the Googlebot gets stuck in the redirect (since it doesn't store cookies) and therefore won't index the proper content. So I'm trying to think of an alternative implementation that won't hurt the site's Google ranking and won't have any other adverse effects. I've considered a couple of options: 1) Redirect (the current solution), but don't redirect if the user agent matches the Googlebot's UA string. This would be ideal; however, I'm not sure if Google likes their bot being treated differently from other users, and I'm afraid the site's ranking may be somehow penalised if I go ahead with this. 2) Use a JavaScript popup instead of a redirect. This would make sure the Googlebot finds the content it needs; however, I envision this approach causing compatibility issues with the myriad mobile devices/browsers out there, and it may affect page load time. How valid are these options? And is there a better option for implementing this feature out there? I've tried researching this topic but surprisingly can't find any reputable-looking blog posts that explore it.
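    For reference, the first option usually comes down to one extra user-agent check in the middleware that already does the redirect. A minimal sketch of what that can look like in Django (the class name, cookie name and URL are made up for illustration, and whether special-casing crawlers is acceptable SEO-wise is exactly the part worth researching):

    ```python
    from django.shortcuts import redirect

    MOBILE_TOKENS = ("iphone", "ipod")
    BOT_TOKENS = ("googlebot", "bingbot")       # crawlers that should see the real page

    class AppPromoMiddleware:
        """Redirect iPhone visitors to the promo page once, but never redirect crawlers."""

        def __init__(self, get_response):
            self.get_response = get_response

        def __call__(self, request):
            ua = request.META.get("HTTP_USER_AGENT", "").lower()
            is_mobile = any(t in ua for t in MOBILE_TOKENS)
            is_bot = any(t in ua for t in BOT_TOKENS)
            already_seen = request.COOKIES.get("seen_app_promo") == "1"

            if is_mobile and not is_bot and not already_seen and request.path != "/get-the-app/":
                response = redirect("/get-the-app/")
                response.set_cookie("seen_app_promo", "1", max_age=30 * 24 * 3600)
                return response
            return self.get_response(request)
    ```

    The path check prevents a redirect loop on the promo page itself; everything else is just the existing redirect with the bot exemption added.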

    Read the article

  • Is there really anything to gain with complex design? [duplicate]

    - by SB2055
    This question already has an answer here: What is enterprise software, exactly? 8 answers I've been working for a consulting firm for some time, with clients of various sizes, and I've seen web applications ranging in complexity from really simple: MVC Service Layer EF DB To really complex: MVC UoW DI / IoC Repository Service UI Tests Unit Tests Integration Tests But on both ends of the spectrum, the quality requirements are about the same. In simple projects, new devs / consultants can hop on, make changes, and contribute immediately, without having to wade through 6 layers of abstraction to understand what's going on, or risking misunderstanding some complex abstraction and costing down the line. In all cases, there was never a need to actually make code swappable or reusable - and the tests were never actually maintained past the first iteration because requirements changed, it was too time-consuming, deadlines, business pressure, etc etc. So if - in the end - testing and interfaces aren't used rapid development (read: cost-savings) is a priority the project's requirements will be changing a lot while in development ...would it be wrong to recommend a super-simple architecture, even to solve a complex problem, for an enterprise client? Is it complexity that defines enterprise solutions, or is it the reliability, # concurrent users, ease-of-maintenance, or all of the above? I know this is a very vague question, and any answer wouldn't apply to all cases, but I'm interested in hearing from devs / consultants that have been in the business for a while and that have worked with these varying degrees of complexity, to hear if the cool-but-expensive abstractions are worth the overall cost, at least while the project is in development.

    Read the article

  • Tracking down memory issues affecting a website

    - by gaoshan88
    I've got a website (WordPress based) that became unresponsive. I SSH'd into the server and saw that we were out of memory. Errors in my Apache log files indicated the same (things failing to be allocated due to lack of memory). Restarting the server fixes it. So I looked in access.log and error.log around the time of the incident, but I see nothing strange: no extra traffic, no unusual requests. In fact, the only request around the time of the problem was one from Googlebot for an RSS feed... at that point I start to see 500 response codes in the logs, until the machine was rebooted. I looked in message.log hoping to see something, but there is nothing at all for that entire day (which is odd, as there are entries for every other day). The site has a large amount of memory allocated to it and normally runs using about 30% of what is available. My question: how would you go about trying to track this down at this point? What are some other log files I could check or strategies I could take?
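    When eyeballing the access log turns up nothing, counting it sometimes does: requests per minute, top client IPs, and top URLs in the window before the incident will expose a slow crawl or one memory-hungry endpoint that a visual scan misses. A rough sketch for a combined-format Apache/Nginx access log (the field positions assume the default log format, so adjust if yours differs):

    ```python
    import sys
    from collections import Counter

    def summarize(log_path: str) -> None:
        per_minute = Counter()
        per_ip = Counter()
        per_url = Counter()
        with open(log_path, errors="replace") as f:
            for line in f:
                parts = line.split()
                if len(parts) < 7:
                    continue
                ip = parts[0]
                timestamp = parts[3].lstrip("[")          # e.g. 10/Oct/2012:13:55:36
                minute = timestamp[:17]                   # truncate to the minute
                url = parts[6]
                per_minute[minute] += 1
                per_ip[ip] += 1
                per_url[url] += 1
        print("Busiest minutes:", per_minute.most_common(5))
        print("Top client IPs:", per_ip.most_common(5))
        print("Top URLs:", per_url.most_common(5))

    if __name__ == "__main__":
        summarize(sys.argv[1] if len(sys.argv) > 1 else "access.log")
    ```

    Comparing the busiest minutes and top URLs against the incident time usually tells you whether the memory pressure came from traffic at all, or from something internal like a cron job or a runaway plugin.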

    Read the article

  • Implementing new required feature after software release

    - by TiagoBrenck
    Fake scenario: there is a piece of software that was released 1 year ago. The software is used to map and register all kinds of animals on our planet. When the software was released, the client only needed to know the scientific name of the animal, a flag for whether it is at risk of extinction, and a danger scale (this is a fake piece of software and a fake specification; I don't want to discuss that here). There are already 100,000 animal records saved in the DB. New feature: one year later, the client wants a new feature. It is really important to him to know the animals' classes, and this is a required field. So he asks me to add a field to input the animal class, and this field is required. Or maybe where the animal was discovered. Problem: I already have 100,000 recorded animals without a class or a discovery location, but I need to add a new column to store this information, and this column can't be null. I don't have a default value for this situation (there isn't a default animal class or discovery location). I don't want to keep the requirement rule only in my software; my DB must have this requirement too (I like to keep business rules in the DB as well). What are the alternatives to solve this situation? I am in a situation where this new feature cannot be previewed or reviewed for the existing records. The time has already passed and I can't go back in time to get it.
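    A common compromise here is a sentinel value: add the column with an explicit 'not yet classified' default so the NOT NULL constraint can hold for the 100,000 legacy rows, and enforce the stricter "must be a real class" rule for new records in the application (or with a CHECK constraint once the data has been cleaned up). A small sketch of the idea using Python's built-in sqlite3; the table and column names are invented, and the same ALTER works, with dialect differences, on most databases:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Stand-in for the existing schema and its 100,000 legacy rows.
    conn.execute("CREATE TABLE animals (scientific_name TEXT, endangered INTEGER)")
    conn.execute("INSERT INTO animals VALUES ('Panthera leo', 0)")

    # The migration: add the required column with a sentinel default so the
    # NOT NULL constraint holds even for rows that predate the new requirement.
    conn.execute(
        "ALTER TABLE animals "
        "ADD COLUMN animal_class TEXT NOT NULL DEFAULT 'UNCLASSIFIED'"
    )

    # New records must supply a real value; the stricter rule lives in application code.
    def insert_animal(scientific_name: str, animal_class: str) -> None:
        if not animal_class or animal_class == "UNCLASSIFIED":
            raise ValueError("animal_class is required for new records")
        conn.execute(
            "INSERT INTO animals (scientific_name, animal_class) VALUES (?, ?)",
            (scientific_name, animal_class),
        )

    insert_animal("Panthera onca", "Mammalia")
    print(conn.execute("SELECT scientific_name, animal_class FROM animals").fetchall())
    ```

    The sentinel keeps the database constraint honest while making the backfill problem visible and queryable, instead of hiding it behind a nullable column.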

    Read the article

  • When is it ever ok to write your own development tools? (editor into IDE)

    - by mario
    So I'm foremost using a text editor for coding. It's a very bare-bones editor; it provides mostly just syntax highlighting. But on rare occasions I also need to debug something, and that's when I have to resort to an IDE (mostly NetBeans, but I've got a fiddly Eclipse/Aptana working as a second fallback). For general use, however, IDEs don't feel workable to me. It's a visual thing, being used to console UIs, etc. And switching back and forth between a text editor and an IDE is slightly cumbersome too. That's why I'm considering extending the editor, not really into a full-fledged IDE, but at the very least integrating a debug feature. Since I'm working in PHP, it seems not that much effort. DBGp allows the debug handler to be externalized from the editor, so it's just minor integration work and figuring out how to shoehorn a breakpoint feature into the editor (joe, btw). And while I've also got the time to do that, I'm wondering if this is really worthwhile. In this case it's not a needed development tool; it's just for convenience. And the cause for doing it is basically just not liking the existing solution. While over time I might extend and adapt this debugger thing, it will initially be as circumstantial as Eclipse. It inevitably starts out as a poor development tool. Furthermore, there is likely not much reuse. (Okay, this is not an important point. Most such software exists sans much of a use case. And also, obviously, similar extensions already exist for emacs and vim, so it cannot be completely pointless.) But what's a general guideline on attempting to concoct custom development tools, particularly if they are not really needed but satisfy personal preferences? (Usability enhancement not certain.)
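    For a sense of scale, the editor-side half of DBGp is small: the debug engine (Xdebug, on port 9000 by default in version 2) connects out to the editor and frames each engine-to-IDE message as a length, a NUL byte, an XML payload, and another NUL, while IDE-to-engine commands are plain NUL-terminated lines. The rough sketch below only accepts one connection, dumps the init packet, and issues a single run command; it is a protocol illustration based on the DBGp spec as Xdebug documents it, not a usable debugger:

    ```python
    # Rough sketch: accept one DBGp connection (e.g. from Xdebug) and dump the init packet.
    # DBGp frames engine->IDE messages as: <data length> NUL <xml payload> NUL.
    import socket

    HOST, PORT = "0.0.0.0", 9000          # Xdebug 2's default remote port

    def read_message(conn: socket.socket) -> bytes:
        buf = b""
        while buf.count(b"\0") < 2:       # wait for "length NUL payload NUL"
            chunk = conn.recv(4096)
            if not chunk:
                break
            buf += chunk
        _, _, rest = buf.partition(b"\0")             # drop the length prefix
        payload, _, _ = rest.partition(b"\0")
        return payload

    with socket.socket() as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((HOST, PORT))
        server.listen(1)
        print(f"Waiting for a DBGp engine on port {PORT}...")
        conn, addr = server.accept()
        with conn:
            print("init packet:", read_message(conn).decode(errors="replace"))
            conn.sendall(b"run -i 1\0")   # tell the engine to continue execution
            print("response:", read_message(conn).decode(errors="replace"))
    ```

    Breakpoints, stepping and variable inspection are more DBGp commands of the same shape, which is what makes wiring this into an editor plausible as a spare-time project.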

    Read the article

  • New SPC2 benchmark- The 7420 KILLS it !!!

    - by user12620172
    This is pretty sweet. The new SPC2 benchmark came out last week, and the 7420 not only came in 2nd of ALL speed scores, but came in #1 for price per MBPS. Check out this table. The 7420 score of 10,704 makes it really fast, but that's not the best part. The price one would have to pay in order to beat it is ridiculous. You can go see for yourself at http://www.storageperformance.org/results/benchmark_results_spc2 - the only system on the whole page that beats it was over twice the price per MBPS. Very sweet for Oracle. So let's see: the 7420 is the fastest per $. The 7420 is the cheapest per MBPS. The 7420 has incredible built-in features, management services, analytics, and protocols. It's extremely stable and as a cluster has no single point of failure. It won the Storage Magazine award for best NAS system this year. So how long will it be before it's the number 1 NAS system in the market? What are the biggest hurdles still stopping the widespread adoption of the ZFSSA? From what I see, it's three things: 1. Administrators' comfort level with older legacy systems. 2. Politics. 3. Past issues with Oracle Support. I see all of these issues crop up regularly. Number 1 just takes time and education. Number 3 takes time with our new, better, and growing support team; many of them came from Oracle and there were growing pains when they went from a straight software model to having to also support hardware. Number 2 is tricky, but it's the job of the sales teams to break through the internal politics and help their clients see the value in Oracle hardware systems. Benchmarks like this will help.

    Read the article

  • TechEd North America 2012-Day 4 #msTechEd #teched

    - by Marco Russo (SQLBI)
    I didn't have time yesterday to write a blog post before the day began, so I'm catching up now that I'm already back in Europe. I spent many hours at the Microsoft booth meeting people who asked questions about Tabular, Power View and PowerPivot. I have to say that I'm really glad so many people have started using PowerPivot, and glad of the large interest around Tabular. During my "BISM: Multidimensional vs. Tabular" session I was helped by Alberto during the demo (we joked that he was the demo monkey), and the feedback received was good. I tried to compare the strengths and weaknesses of the two modeling options without spending time describing the areas in which they are similar. During the days before, I had many discussions about scenarios based on snapshots, and I added a few slides to the presentation in order to cover this area, which I had thought was marginal but seems to be a very common one. To know more, watch the presentation when it becomes available, or wait for some articles I'll write in the future describing these patterns. In less than 10 days I'll be at TechEd Europe, and I'm really looking forward to meeting other Tabular developers and PowerPivot users there!

    Read the article

  • What kinds of issues can one expect when changing a domain names registar? (3 questions)

    - by anonymous-one
    Assuming that no 'unusual' items come up, what kind of disruptions can one expect when moving a domain between registrars? I understand some of the below may vary between registrars, but assume both ends are large, proficient registrars: a) Will the NS settings be mirrored? We use a dedicated DNS service provider, so we are not using the originating registrar's name servers. All that we are concerned about is that the existing NS values are mirrored at the target registrar. b) Are incoming domain transfers automated on the target registrar's end? E.g., if we begin the transfer process during business hours at the source registrar, will someone have to manually approve the inbound transfer (most likely during their business hours) at the target registrar? c) Is the domain ever 'in limbo'? At any point in the process, is there ever a time when the NS values for the domain are not populated (as they were prior to initiating the transfer) OR one does not have access to populate them (at the target registrar)? Thank you kindly for the help.
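    On points (a) and (c), one practical reassurance is to simply watch the delegation while the transfer runs: if the published NS set never changes and never goes empty, nothing was 'in limbo' from a resolver's point of view. A small sketch that polls the NS records, using the third-party dnspython package (2.x API); the domain is a placeholder:

    ```python
    # Requires the third-party dnspython package (pip install dnspython, 2.x API).
    import time
    import dns.resolver

    DOMAIN = "example.com"      # placeholder: the domain being transferred

    def current_ns(domain: str) -> set:
        answer = dns.resolver.resolve(domain, "NS")
        return {str(r.target).rstrip(".") for r in answer}

    if __name__ == "__main__":
        baseline = current_ns(DOMAIN)
        print("NS at start of transfer:", sorted(baseline))
        while True:
            time.sleep(600)                      # poll every 10 minutes during the transfer
            now = current_ns(DOMAIN)
            if now != baseline:
                print("Delegation changed:", sorted(now))
                baseline = now
    ```

    Caching means changes show up with some delay, but a poll like this is usually enough to confirm that the NS values carried across the transfer unchanged.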

    Read the article
