Search Results

Search found 16772 results on 671 pages for 'charles long'.

Page 106/671 | < Previous Page | 102 103 104 105 106 107 108 109 110 111 112 113  | Next Page >

  • SonicAgile 2.0 with a Real-Time Backlog and Kanban

    - by Stephen.Walther
    I’m excited to announce the launch of SonicAgile 2.0 which is a free Agile Project Management tool.  You can start using it right now (for free) by visiting the following address: http://sonicagile.com/ What’s special about SonicAgile?  SonicAgile supports a real-time backlog and kanban. When you make changes to the backlog or kanban then those changes appear on every browser in real-time. For example, if multiple people open the Kanban in their browser, and you move a card on the backlog from the To Do column to the Done column then the card moves in every browser in real-time. This makes SonicAgile a great tool to use with distributed teams. SonicAgile has all of the features that you need in an Agile Project Management tool including: Real-time Backlog – Prioritize all of your stories using drag-and-drop. Real-time Kanban – Move stories from To Do, In Progress, to Done Burndown Charts – Track the progress of your team as your work burns down over time. Iterations – Group work into iterations (sprints). Tasks – Break long stories into tasks. Acceptance Criteria – Create a checklist of requirements for a story to be done. Agile Estimation – Estimate the amount of work required to complete a story using Points, Shirt Sizes, or Coffee Cup sizes. Time-Tracking – Track how long it takes to complete each story and task. Roadmap – Do release planning by creating epics and organizing the epics into releases. Discussions – Discuss each story or epic by email. Watch the following video for a quick 3 minute introduction: http://sonicagile.com/ Read the following guide for a more in-depth overview of the features included in SonicAgile: http://sonicagile.com/guide-agile-project-management-using-sonicagile/ I’d love to hear your feedback!  Let me know what you think by posting a comment.

    Read the article

  • IT lead does not have a backup, DR plan in writing

    - by Alex
    This is a general management question for the IT managers out there. We are a small firm with about 4 servers in our colo cabinet and no full-time IT manager, but we do have one person on a monthly contract, and I am having a terrible time getting him to share what these plans actually are. I am sure he HAS a plan (and it's probably in his head), but that does us no good if he gets hit by a bus. How would you guys handle this? He is a long-time friend, but I fear this is dangerous for us long term. I have confronted him on several occasions about this, and he tells me not to worry, he has got it covered. Thanks.

    Read the article

  • Using Performance Monitor To Get IIS7 Response Turnaround Time

    - by alphadogg
    I have an MVC2 web application on W2K8R2/IIS7 that I'd like to benchmark/monitor. Some XHR requests by a browser-based client are suddenly taking 8-10 sec when they used to take much less time (as per Chrome Developer Tools timings). The underlying SQL Server queries, using the same params, run in 1.4s according to the total execution time client statistics in SSMS. I'm assuming that there are various counters that can specifically dissect the time taken/waiting/processing between IIS7 itself and the web application? For example, can I check how long the IIS7 application and the database take to produce a response, and how long IIS7 itself takes to serve it?
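
    While the right counters get sorted out, one simple cross-check is to time the same XHR endpoint end to end from a client and compare it against the ~1.4s the query takes in SSMS; whatever is left over is being spent in IIS7, the MVC2 pipeline, serialization, or the network. Below is a minimal sketch of that measurement, assuming a hypothetical endpoint URL and using only the Python standard library.

```python
import time
import urllib.request

# Hypothetical endpoint: replace with the URL the browser's XHR actually calls.
URL = "http://myserver/myapp/api/slow-endpoint"

def time_request(url, runs=5):
    """Time the full request/response cycle several times."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()  # include transfer time, not just time-to-first-byte
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    results = time_request(URL)
    for i, t in enumerate(results, 1):
        print(f"run {i}: {t:.2f}s")
    # Anything well above the ~1.4s the query takes in SSMS is overhead in
    # IIS7, the application, serialization, or the network.
    print(f"average: {sum(results) / len(results):.2f}s")
```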

    Read the article

  • How not to show user name beside each consecutive message

    - by Joce
    After years on GTalk, I recently switched to Pidgin for my IM needs. Almost all is OK with me, though there's one thing that bugs me: it shows the name of the author beside each consecutive message, e.g. "joce: Hey / joce: Long time no see! / joce: How are you? / Bob: Hey! / Bob: Indeed!", instead of (as it is in GTalk) "joce: Hey / Long time no see! / How are you? / bob: Hey! / Indeed!". Is there a setting that I've missed or a plugin that could do that for me?

    Read the article

  • Windows shutdown processes termination sequence

    - by jpmartins
    Today I saw a weird situation. I have a theory, but it would help to know more about the Windows shutdown process; if you have some knowledge about it, please share. A machine was shut down (at this moment I suspect unexpected maintenance), and on that machine there was a long-running process that was interrupted. Monitoring confirms that the process did not terminate normally. Looking at the logs for the long-running process, it seems it was just finishing. That seems highly improbable since it had been running for more than 6 hours (which is a bit more than the usual 5 hours). The process launches child processes and waits for results from them; I suspect poor error control in the parent process and that the shutdown terminated the child processes first.

    Read the article

  • SQL Monitor Alerts in Outlook Without Configuring Email Settings

    - by Fatherjack
    SQL Monitor is a Red Gate tool that I have a long history with, and I have worked closely with the development team from a time before it was called SQL Monitor. It is with that history in mind that I am a little disappointed in myself that I have only just found out about a pretty cool feature. Out of the box SQL Monitor keeps itself to itself; it busily goes about watching over your servers, noting down when things look suspicious, change drastically or are just out and out wrong. You have to go into the settings and provide email details (SMTP server, account details etc.) before it starts getting at all intrusive with warnings and alerts on the condition of your servers. However, it was after installing the most recent version, while going through the application screen by screen looking for new and interesting changes, that I noticed something that had avoided my attention. On the Alerts tab there is an option in the left-hand menu. I don’t know how long ago it appeared or why I have never explored it previously, but it appears that you can see your Alerts in the format of an RSS feed. Now when you click that link you are taken to a page that is the raw RSS XML – not too interesting, but clearly you can use this in an RSS aggregator. Such as Outlook. Note the URL in the newly opened page and take it with you into Outlook. For me it is in the form of http://SQLMonitorServerName/Alerts/Inbox/Feed. Again, this is something that I have only recently noticed – Outlook can aggregate RSS feeds. Down below the Inbox, Drafts folders etc., one up from the bottom is RSS Feeds. If you right-click that and choose to Add a feed then you can supply the URL for SQL Monitor Alerts: And there you have it, your SQL Monitor Alerts available in Outlook where you can keep an eye on the number of unread items and pick them off at your convenience.
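
    Because the alerts are exposed as plain RSS, anything that understands a feed can read them, not just Outlook. Here is a minimal sketch using only the Python standard library; the server name in the URL is the same placeholder used above, and the item layout is assumed to be standard RSS 2.0 (channel/item/title), which may differ slightly from what SQL Monitor actually emits.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder: substitute your own SQL Monitor host name.
FEED_URL = "http://SQLMonitorServerName/Alerts/Inbox/Feed"

def list_alerts(url):
    """Fetch the alert feed and print one line per item."""
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    # Assumes a standard RSS 2.0 layout: rss/channel/item.
    for item in tree.getroot().findall("./channel/item"):
        title = item.findtext("title", default="(no title)")
        published = item.findtext("pubDate", default="")
        print(f"{published}  {title}")

if __name__ == "__main__":
    list_alerts(FEED_URL)
```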

    Read the article

  • The Cloud is STILL too slow!

    - by harry.foxwell(at)oracle.com
    If you've been in the computing industry long enough to remember dialup modems and other "ancient" technologies, you might be tempted to marvel at today's wonderfully powerful multicore PCs, ginormous disks, and blazingly fast networks. Wow, you're in Internet Nirvana, right? Well, no, not by a long shot. Considering the exponentially growing expectations of what the Web, that is, "the Cloud", is supposed to provide, today's Web/Cloud services are still way too slow. Already we are seeing cloud-enabled consumer devices that are stressing even the most advanced public network services: the iPad and its competitors, ever more powerful smartphones, and an imminent horde of special-purpose gadgets such as the proposed "cloud camera" (see http://gdgt.com/discuss/it-time-cloud-camera-found-out-cnr/ ). And at the same time that the number and type of cloud services are growing, user tolerance for even the slightest of download delays is rapidly decreasing. Ten years ago Web developers followed the "8-Second Rule" (the average time a typical Web user would tolerate for a page to download and render). Not anymore; now it's less than 3 seconds, and only a bit longer for mobile devices (see http://www.technologyreview.com/files/54902/GoogleSpeed_charts.pdf). How spoiled we've become! Google, among others, recognizes this problem and is working to encourage the development of a faster Web (see http://www.technologyreview.com/web/32338/). They, along with their competitors and ISPs, will have to encourage and support significantly better Web performance in order to provide the types of services envisioned for the Cloud. How will they do this? Through the development of faster components, better use of caching technologies, and the really tough one - exploiting parallelism. Not that parallel technologies like multicore processors are hard to build... we already have them. It's just that we're not that good yet at using them effectively. And if we don't get better, users will abandon cloud-based services... in less than 3 seconds.

    Read the article

  • An Overview of Batch Processing in Java EE 7

    - by Janice J. Heiss
    Up on otn/java is a new article by Oracle senior software engineer Mahesh Kannan, titled “An Overview of Batch Processing in Java EE 7.0,” which explains the new batch processing capabilities provided by JSR 352 in Java EE 7. Kannan explains that “Batch processing is used in many industries for tasks ranging from payroll processing; statement generation; end-of-day jobs such as interest calculation and ETL (extract, transform, and load) in a data warehouse; and many more. Typically, batch processing is bulk-oriented, non-interactive, and long running—and might be data- or computation-intensive. Batch jobs can be run on schedule or initiated on demand. Also, since batch jobs are typically long-running jobs, check-pointing and restarting are common features found in batch jobs.” JSR 352 defines the programming model for batch applications plus a runtime to run and manage batch jobs. The article covers feature highlights, selected APIs, the structure of the Job Specification Language, and explains some of the key functions of JSR 352 using a simple payroll processing application. The article also describes how developers can run batch applications using GlassFish Server Open Source Edition 4.0. Kannan summarizes the article as follows: “In this article, we saw how to write, package, and run simple batch applications that use chunk-style steps. We also saw how the checkpoint feature of the batch runtime allows for the easy restart of failed batch jobs. Yet, we have barely scratched the surface of JSR 352. With the full set of Java EE components and features at your disposal, including servlets, EJB beans, CDI beans, EJB automatic timers, and so on, feature-rich batch applications can be written fairly easily.” Check out the article here.

    Read the article

  • Did you forget me?

    - by Ratman21
    I know it has been a long time since I last blogged. Still at it, looking for work in the “IT” field. Had another phone interview (only found out during the interview that it was for a one-year contract job, but I still would have taken it) for a Help Desk job. Didn’t get it; they thought I was not an application support person and more of a hardware support person. Gee… I started out in “IT” as a programmer. Then a programmer/computer operator, then a Tandem/LAN operator and finally a network operator. I had to deal with so many different operating systems, software and applications. And they thought I was too hardware. Well, I am working a temp day job with the U.S. Census. It gets me out of the house and out in the country. I find getting paid to check for living quarters is not a bad job, except for the many houses I find that are up for sale, where it looks like it was not the owners’ (former owners, it seems) idea, with the kids’ toys still in the yards. Not good for someone with an overactive imagination, or for my truck. So far I have backed into a ditch (and had to be pulled out), into a power pole (no damage to the pole and very little to the truck) and into a mailbox (no damage to the truck, but the mailbox was leaning a little) in the last two weeks. Oh, and I have started reading/using “The Love Dare” book from the movie “Fireproof”. I restarted the dare this Sunday (yes, I have had to go back to day one from day five). Dare/day one is “Love Is Patient”, and the first dare (reading from the book) is: “The first part of this dare is fairly simple. Although love is communicated in a number of ways, our words often reflect the condition of our heart. For the next day, resolve to demonstrate patience and to say nothing negative to your spouse at all. If the temptation arises, choose not to say anything. It’s better to hold your tongue than to say something you’ll regret.” This was almost too easy, as I can hold back from saying anything bad to anyone, but this can also be a problem in life (you hold back for so long and… boom). Check back for dare/day two, “Love Is Kind”.

    Read the article

  • Friday Fun: Sydney Shark

    - by Asian Angel
    Another long week is finally coming to an end, so why not have a little fun to finish the week out before going home? In today’s game you become a powerful shark determined to turn the Sydney coastline into one long smorgasbord while causing as much mayhem and destruction as possible along the way. Note: You will most likely encounter a small video ad as the game is loading and then again when your final score is displayed. Both come with a small clickable link for skipping the ads.

    Read the article

  • Right-Time Retail Part 3

    - by David Dorf
    This is part three of the three-part series. Read Part 1 and Part 2 first. Right-Time Marketing: Real-time isn’t just about executing faster; it extends to interactions with customers as well. As an industry, we’ve spent many years analyzing all the data that’s been collected. Yes, that data has been invaluable in helping us make better decisions like where to open new stores, how to assort those stores, and how to price our products. But the recent advances in technology are now making it possible to analyze and deliver that data very quickly… fast enough to impact a potential sale in near real-time. Let me give you two examples. Salesmen in car dealerships get pretty good at sizing people up. When a potential customer walks in the door, it doesn’t take long for the salesman to figure out the revenue at stake. Is this person a real buyer, or just looking for a fun test drive? Will this person buy today or three months from now? Will this person opt for the expensive packages, or go bare bones? While the salesman certainly asks some leading questions, much of the information is discerned through body language. But body language doesn’t translate very well over the web. Eloqua, which was acquired by Oracle earlier this year, reads internet body language. By tracking the behavior of the people visiting your web site, Eloqua categorizes visitors based on their propensity to buy. While Eloqua’s roots have been in B2B, we’ve been looking at leveraging the technology with ATG to target B2C. Knowing what sites were previously visited, how often the customer has been to your site recently, and how long they’ve spent searching can help you understand where the customer is in their purchase journey. And knowing that bit of information may be enough to help close the deal with a real-time offer, follow-up email, or online customer service pop-up. This isn’t so different from the days gone by when the clerk behind the counter of the corner store noticed you were lingering in a particular aisle, so he walked over to help you compare two products and close the sale. You appreciated the personalized service, and he knew the value of the long-term relationship. Move that same concept into the digital world and you have Oracle’s CX Suite, a cloud-based offering of end-to-end customer experience tools, assembled primarily from acquisitions. Those tools are Oracle Marketing (Eloqua), Oracle Commerce (ATG, Endeca), Oracle Sales (Oracle CRM On Demand), Oracle Service (RightNow), Oracle Social (Collective Intellect, Vitrue, Involver), and Oracle Content (Fatwire). We are providing the glue that binds the CIO and CMO together to unleash synergies that drive the top-line higher, and by virtue of the cloud-approach, keep costs at bay.
My second example of real-time marketing takes place in the store but leverages the concepts of Web marketing. In 1962 the decline of personalized service in retail began. Anyone know the significance of that year? That’s when Target, K-Mart, and Walmart each opened their first stores, and over the succeeding years the industry chose scale over personal service. No longer were you known as “Jane with the snotty kid so make sure we check her out fast,” but you suddenly became “time-starved female age 20-30 with kids.” I’m not saying that was a bad thing – it was the right thing for our industry at the time, and it enabled a huge amount of growth, cheaper prices, and more variety of products. But scale alone is no longer good enough. Today’s sophisticated consumer demands scale, experience, and personal attention. To some extent we’ve delivered that on websites via the magic of cookies, your willingness to log in, and sophisticated data analytics. What store manager wouldn’t love a report detailing all the visitors to his store, where they came from, and which products they examined? People trackers are getting more sophisticated, incorporating infrared, video analytics, and even face recognition. (Next time you walk in front of a mannequin, don’t be surprised if it’s looking back.) But the ultimate marketing conduit is the mobile phone. Since each mobile phone emits a unique number on WiFi networks, it becomes the cookie of the physical world. Assuming Congress keeps privacy safeguards reasonable, we’ll have a win-win situation for both retailers and consumers. Retailers get to know more about the consumer’s purchase journey, and consumers get higher levels of service with the retailer. When I call my bank, a couple things happen before the call is connected. A reverse look-up on my phone number identifies me so my accounts can be retrieved from Siebel CRM. Then the system anticipates why I’m calling based on recent transactions. In this example, it sees that I was just charged a foreign currency fee, so it assumes that’s the reason I’m calling. It puts all the relevant information on the customer service rep’s screen as it connects the call. When I complain about the fee, the rep immediately sees I’m a great customer and I travel lots, so she suggests switching me to their traveler’s card that doesn’t have foreign transaction fees. That technology is powered by a product called Oracle Real-Time Decisions, a rules engine built to execute very quickly, basically in the time it takes the phone to ring once. So let’s combine the power of that product with our new-found mobile cookie and provide contextual customer interactions in real-time. Our first opportunity comes when a customer crosses a pre-defined geo-fence, typically a boundary around the store. Context is the key to our interaction: that’s the customer (known or anonymous), the time of day and day of week, and location. Thomas near the downtown store on a Wednesday at noon means he’s heading to lunch. If he were near the mall location on a Saturday morning, that’s a completely different context. But on his way to lunch, we’ll let Thomas know that we’ve got a new shipment of ASICS running shoes on display with a simple text message. We used the context to look-up Thomas’ past purchases and understood he was an avid runner. We used the fact that this was lunchtime to select the type of message, in this case an informational message instead of an offer. Thomas enters the store, phone in hand, and walks to the shoe department.
He scans one of the new ASICS shoes using the convenient QR Codes we provided on the shelf-tags, but then he starts scanning low-end Nikes. Each scan is another opportunity to both learn from Thomas and potentially interact via another message. Since he historically buys low-end Nikes and keeps scanning them, he’s likely falling back into his old ways. Our marketing rules are currently set to move loyal customers to higher-margin products. We could have set the dials to increase visit frequency, move overstocked items, increase basket size, or many other settings, but today we are trying to move Thomas to higher-margin products. We send Thomas another text message; this time it’s a personalized offer for 10% off ASICS good for 24 hours. Offering him a discount on Nikes would be throwing margin away since he buys those anyway. We are using our marketing dollars to change behavior that increases the long-term value of Thomas. He decides to buy the ASICS and scans the discount code on his phone at checkout. Checkout is yet another opportunity to interact with Thomas, so the transaction is sent back to Oracle RTD for evaluation. Since Thomas didn’t buy anything with the shoes, we’ll print a bounce-back coupon on the receipt offering 30% off ASICS socks if he returns within seven days. We have successfully started moving Thomas from low-margin to high-margin products. In both of these marketing scenarios, we are able to leverage data in near real-time to decide how best to interact with the customer and lead to an increase in the lifetime value of the customer. The key here is acting at the moment the customer shows interest using the context of the situation. We aren’t pushing random products at haphazard times. We are tailoring the marketing to be very specific to this customer, and it’s the technology that allows this to happen in near real-time. Conclusion: As we enable more right-time integrations and interactions, retailers will begin to offer increased service to their customers. Localized and personalized service at scale will drive loyalty and lead to meaningful revenue growth for the retailers that execute well. Our industry needs to support Commerce Anywhere…and commerce anytime as well.
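
    Strip away the products and the store scenario above is a rules engine fed with context (who the customer is, where they are, what time it is, what they just scanned) that returns the next best interaction. The sketch below is emphatically not Oracle Real-Time Decisions – just a toy illustration of that shape, with invented rule conditions that mirror the Thomas example.

```python
from dataclasses import dataclass

@dataclass
class Context:
    customer: str          # loyalty id, or "anonymous"
    location: str          # e.g. "downtown", "mall"
    daypart: str           # e.g. "lunch", "morning"
    last_scanned: str      # brand of the most recent in-store scan, or ""
    preferred_brand: str   # derived from purchase history

def next_interaction(ctx: Context) -> str:
    """Pick the next message for a customer, highest-priority rule first."""
    # Rule 1: a loyal customer keeps scanning the low-margin brand they always buy,
    # so make a personalised, time-limited offer on a higher-margin alternative.
    if ctx.last_scanned and ctx.last_scanned == ctx.preferred_brand:
        return f"10% off the premium alternative to {ctx.preferred_brand}, today only"
    # Rule 2: a known customer crosses the geo-fence around lunchtime,
    # so send an informational message rather than a discount.
    if ctx.daypart == "lunch" and ctx.preferred_brand:
        return f"New shipment of {ctx.preferred_brand} gear is on display"
    # Default: say nothing rather than push a random product at a haphazard time.
    return ""

if __name__ == "__main__":
    thomas = Context("thomas", "downtown", "lunch", "", "ASICS")
    print(next_interaction(thomas))      # informational message
    thomas.last_scanned = "Nike"
    thomas.preferred_brand = "Nike"
    print(next_interaction(thomas))      # margin-shifting offer
```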

    Read the article

  • Copying 500GB Data to EC2 Instances Local Drive

    - by iCode
    Please do not ask me why (they made me), but I have to copy 500GB of data to the local drive of every one of the 200 nodes/instances that I am launching in EC2. For reasons beyond this post, this data must be on the local drive and not an EBS drive, so I can not benefit from snapshots. What is the fastest way that I can manage this? Copying from S3 to each node takes a long time. I tried attaching an EBS volume with the data to every node and then copying the data from EBS to the local drive, but that also takes a long time (several hours). Now, I am also thinking of using BitTorrent, but I am not sure how well it is going to work. What is the best way to copy 500GB of static data to each local drive of 200 EC2 instances? The 500GB of data is composed of several hundred files of varying size, but the biggest file is 20GB.
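
    One way to avoid hammering S3 from 200 nodes at once (and an alternative to standing up BitTorrent) is a fan-out copy: seed a single instance, then have every instance that already holds the data push it to one that doesn't, so the number of copies doubles each round and 200 nodes are covered in roughly 8 rounds. Below is a rough sketch of that idea; the host names, the path, and the use of rsync over ssh are assumptions for illustration, not a tested recipe.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical host names and path; adjust to the real instance addresses.
HOSTS = [f"node{i:03d}" for i in range(200)]   # node000 already has the data
DATA_PATH = "/mnt/local/dataset/"

def copy(src_host, dst_host):
    """Push the data set from one node that has it to one that doesn't."""
    # rsync over ssh; assumes key-based ssh between instances is set up.
    cmd = ["ssh", src_host,
           "rsync", "-a", DATA_PATH, f"{dst_host}:{DATA_PATH}"]
    subprocess.run(cmd, check=True)

def fan_out(hosts):
    """Doubling rounds: 1 -> 2 -> 4 -> ... nodes with data (~8 rounds for 200)."""
    have = [hosts[0]]          # seeded once from S3 or an EBS volume
    need = list(hosts[1:])
    while need:
        batch = need[:len(have)]            # each source serves one target
        del need[:len(have)]
        with ThreadPoolExecutor(max_workers=len(batch)) as pool:
            list(pool.map(lambda pair: copy(*pair), zip(have, batch)))
        have.extend(batch)

if __name__ == "__main__":
    fan_out(HOSTS)
```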

    Read the article

  • People's experience of Cloud Computing (using Force.com)

    - by Digger
    I would like to know about people's experience of working with APEX and the SalesForce.com platform: was it easy to work with? How similar to Java and C# is it? What did you like? What don't you like? Would you recommend it? Do you think cloud computing has a long-term successful future? My reason for asking is that I am currently looking at a new position which involves working with APEX on the SalesForce.com platform. The position interests me, but I just want to try and understand what I might be signing up for with regard to the languages/platform, as it is completely different from what I have worked with before. I have seen lots of videos/blog posts online (mainly from the recent Dreamforce event) and they obviously are very positive, but I was just after some experiences from developers, both positive and negative. I find cloud computing a very interesting idea, but I am very new to the subject. The position I am looking at offers a fantastic opportunity, but I was just after some opinions on APEX and the platform as I have no real-world experience, just what I have seen from the online videos. I guess ultimately what I am asking is: Are APEX and the SalesForce.com platform good to get involved in? Is development on the Force.com platform just a "career dead end"? Is cloud computing just a fad? Or does it have a long-term future? Apologies in advance if this is the wrong place to ask such a question. Thanks

    Read the article

  • Searching Your PL/SQL Source with Oracle SQL Developer

    - by thatjeffsmith
    Version 3.2.1 included a few tweaks along with several hundred bug fixes. One of those tweaks was the addition of ‘ALL_SOURCE’ as a selection for the Type drop-down in the Find Database Object panel. Scroll ALL the way down to the bottom. Searching the database for your code or objects can be expensive. The ALL_SOURCE view comes in pretty handy when I want to demo how to cancel long running queries or the Task Progress panel – did you know you can manage all of your long running queries here? Yeah, don’t run this. I pretty much hosed our demo pod at Open World b/c I ran that same query but added an ORDER BY b.TEXT DESC to the query and blew up the TEMP space and filled the primary partition on the image. Fun stuff. Anyways, where was I going with this? Oh yeah, searching ALL_SOURCE can be expensive. So we took it out of the product for awhile. And now it’s back in. If you select the ‘ALL’ field, it doesn’t actually search EVERYTHING, because that would probably be less than helpful. So if you want to search your PL/SQL objects for a scrap or bit of code, use the ‘ALL_SOURCE’ option in v3.2.1. Double-click on the search results to go to the code you’re looking for. Be careful what you search for. Just like any query, it could take awhile.
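
    The expense mentioned above comes from the fact that this search is, at bottom, a filter over the ALL_SOURCE view. A rough hand-rolled equivalent – not SQL Developer's actual query – might look like the sketch below, assuming the cx_Oracle driver and placeholder connection details.

```python
import cx_Oracle

# Placeholder connection details; replace with your own user/password/host/service.
conn = cx_Oracle.connect("scott", "tiger", "dbhost/orcl")

def search_source(pattern):
    """Search stored PL/SQL for a scrap of code via the ALL_SOURCE view."""
    cur = conn.cursor()
    try:
        cur.execute(
            "SELECT owner, name, type, line, text "
            "  FROM all_source "
            " WHERE UPPER(text) LIKE '%' || UPPER(:pat) || '%'",
            pat=pattern)
        for owner, name, obj_type, line, text in cur:
            print(f"{owner}.{name} ({obj_type}) line {line}: {(text or '').rstrip()}")
    finally:
        cur.close()

if __name__ == "__main__":
    search_source("dbms_output.put_line")
```

    Leaving off any ORDER BY on the TEXT column is deliberate; as the Open World anecdote above shows, sorting it is a good way to blow up TEMP.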

    Read the article

  • Safer RAID5 rebuilds using partially failed disks?

    - by arcticmac
    There have been lots of articles posted recently about how RAID5 is dangerous because of long resilver times, and in particular because of increasing chances of encountering a URE during the resilver. Obviously this is a significant concern. However, it seems that in many cases of interest (as long as you're keeping some kind of eye on your disks), when it comes time to rebuild the array, the disk that I'm replacing is still mostly readable. If you try to explain this predicament to the average layperson, they are typically very confused as to why you have two almost completely functional disks but can't produce one working array. It seems to me that there ought to be some way to take advantage of this to make rebuilds safer, as long as I'm willing to have the RAID5 be read-only for a couple of days while it rebuilds. Conceptually, what I have in mind looks something like this: (1) When a disk fails, immediately take the RAID5 offline or mount it read-only. (2) Attach a new disk (either in a spare bay, or externally via eSATA) and begin rebuilding it to replace the failed one; if known, perhaps start with the stripes in which the failure occurred, to minimize the chances of losing those if another disk fails. (3) In the event that a second disk experiences a URE or other failure during the rebuild, try to source that data from the disk that is being replaced (presumably if this happens, more rebuilding would be necessary). (4) When complete, shut down the server, swap the replacement drive into the original bay if desired, and bring the array back up. Obviously such a process would not be appropriate for applications where uptime is critical or data loss cannot be tolerated, but it seems to me that this could help considerably to improve the reliability of RAID5. I assume that there's not a good way to implement a recovery like this at present, given that I haven't seen any indication of tools that are designed to do this, and that it seems like it would be rather obtuse to work out manually. Are there also technical issues with it that I haven't thought of (I'm still fairly new to RAID stuff)? Any thoughts on how hard something like this would be to implement (e.g. in Linux md raid)?
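
    The heart of step (3) is "read from the healthy members, and only when that fails, fall back to the mostly-readable disk being replaced". As far as I know, no standard md tooling works that way today, but the idea itself is simple. Here is a purely conceptual sketch (hypothetical device paths, a fixed chunk size, and no parity math at all) of a block-level copy with a fallback source.

```python
import os

CHUNK = 1024 * 1024  # 1 MiB per read; a real rebuild would work in stripe units

def copy_with_fallback(primary_path, fallback_path, dest_path, size):
    """Copy `size` bytes to dest, preferring primary and falling back per chunk."""
    primary = os.open(primary_path, os.O_RDONLY)
    fallback = os.open(fallback_path, os.O_RDONLY)
    dest = os.open(dest_path, os.O_WRONLY)
    try:
        offset = 0
        while offset < size:
            length = min(CHUNK, size - offset)
            try:
                data = os.pread(primary, length, offset)
            except OSError:
                # URE (or similar) on the healthy source: try the old disk.
                data = os.pread(fallback, length, offset)
            os.pwrite(dest, data, offset)
            offset += length
    finally:
        for fd in (primary, fallback, dest):
            os.close(fd)

# Hypothetical usage: reconstructed data stream vs. the partially failed member.
# copy_with_fallback("/dev/md0_degraded_view", "/dev/sdb1", "/dev/sdd1", 500 * 10**9)
```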

    Read the article

  • Designing a large database with multiple sources

    - by CatchingMonkey
    I have been tasked with redesigning, or at worst optimising, the structure of a database for a data warehouse. Currently, the database has 4 other source databases (which is due to expand to X number of others), all of which have their own data structures, naming conventions etc. At the moment an overnight SSIS package pulls the data from the various sources and then, for each source, converts the data into a standardised, usable format. These tables are then appended to each other, creating a 60m row, 40 column beast! This table is then used in a variety of ways, from an OLAP cube to a web front end. The structure has been in place for a very long time, and through the work I have done I have been able to prove the advantages of normalisation, and this is the way I would like to go. The problem for me is that the overnight process takes so long that I don't then want to spend additional time normalising the last table into something usable. Can anyone offer any insight or ideas into the best way to restructure or optimise the database efficiently? Edit: All the databases are MS SQL Server 2008 R2. Thanks in advance, CM

    Read the article

  • Calculating Cloud Service Costs [closed]

    - by capdragon
    Possible Duplicate: Can you help me with my capacity planning? I would like to scale a web application to the Cloud and wanted to know if anyone had any experience calculating costs and could tell me how I would go about that. I have NO experience with cloud services at all. Currently my production environment consists of two web servers and one database server. If the application continues on a linear growth path, eventually I want to scale to the cloud to avoid any more long-term commitments to extra hardware. I want to be able to create a similar environment to the one I have now as a baseline, and have this as my fixed cost that I will always have. I also want to calculate my variable costs that will increase with more users or bandwidth. I don't have a preferred cloud vendor. Amazon, Rackspace, Terremark or any other is fine as long as I understand how to calculate my fixed and variable costs.
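
    One way to structure the estimate is exactly as described: a fixed monthly baseline mirroring the current two web servers plus one database server, and variable terms that scale with users and bandwidth. The sketch below is plain arithmetic with made-up placeholder rates (every number is hypothetical, not any vendor's actual pricing); swap in the real per-instance, per-GB and per-user figures from whichever provider you evaluate.

```python
# All rates are placeholders, not real vendor pricing.
FIXED_MONTHLY = {
    "web_instances": 2 * 150.00,   # two web servers, $/month each
    "db_instance": 1 * 400.00,     # one database server, $/month
    "load_balancer": 25.00,
}

# Variable unit costs and per-user usage assumptions.
COST_PER_GB_TRANSFER = 0.12      # $/GB of outbound bandwidth
COST_PER_GB_STORAGE = 0.10       # $/GB-month of storage
BANDWIDTH_PER_USER_GB = 0.5      # GB transferred per active user per month
STORAGE_PER_USER_GB = 0.02       # GB stored per active user

def monthly_cost(active_users):
    """Return (fixed, variable) monthly cost for a given number of users."""
    fixed = sum(FIXED_MONTHLY.values())
    variable = active_users * (
        BANDWIDTH_PER_USER_GB * COST_PER_GB_TRANSFER
        + STORAGE_PER_USER_GB * COST_PER_GB_STORAGE
    )
    return fixed, variable

if __name__ == "__main__":
    for users in (1_000, 10_000, 50_000):
        fixed, variable = monthly_cost(users)
        print(f"{users:>6} users: fixed ${fixed:,.2f} + variable ${variable:,.2f}"
              f" = ${fixed + variable:,.2f}/month")
```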

    Read the article

  • Security question pertaining web application deployment

    - by orokusaki
    I am about to deploy a web application (in a couple of months) with the following set-up (perhaps, anyway): Ubuntu Lucid Lynx with an iptables firewall (white-list style with only 3 ports open); a custom SSH port (like 31847 or something); no "root" SSH access; a long, random username (not just "admin" or something) with a long password (65 chars); PostgreSQL which only listens on localhost; a 256-bit SSL cert; a reverse proxy from NGINX to my application server (uWSGI); application-level security (SQL injection, XSS, directory traversal, CSRF, etc.); and perhaps IP masquerading (but I don't really understand this yet). Assume that my colo is secure (physical access isn't my concern for the time being). Does this sound like a secure setup? I hear about people's web apps getting hacked all the time, and part of me thinks, "maybe they're just neglecting something", but the other part of me thinks, "maybe there's nothing you can do to protect your server, and those things are just measures to make it a little harder for script kiddies to get in". If I told you all of this, gave you my IP address, and told you what ports were available, would it be possible for you to get in (assuming you have a penetration testing tool), or is this really protected well?

    Read the article

  • FoxTales: Behind the Scenes at Fox Software by Kerry Nietz

    Flashbacks from the past! It's truly amazing to discover that software development, from freshman to senior level, as well as project management, hasn't changed that much. Kerry Nietz's memoir runs from his final year at college to his first job at Fox Software to 'an early retirement' at Microsoft. This title also brought his other novels to my attention. Once again, here is the review I published on Amazon: Built to last! I have been around in software development for more than a decade now, but honestly I have to admit it is only now that I took the opportunity to read about the history of what used to be my primary programming language. In fact, I started with Visual FoxPro 6 back in 1999 and only went down to FoxPro for Windows 2.6 during migration projects - long after the stories described in this title. It is really interesting to see how they actually managed to create a great product with such a small team of developers. "Create the best Report Writer in the world, out of only sawdust, bubblegum, and dreams." - That's the best sentence I'm going to quote from this title in the future. An inspiration to achieve the impossible, just by taking small steps. Just begin the journey - one step after the next. If you fall, stand up and continue to walk. Kerry takes the reader on an amazing trip through almost 4 years working at a small software company in Perrysburg, Ohio, which went from another 'look-alike' of the mighty Ashton-Tate dBase to the leading force in database development, long before Microsoft Access (project name: Cirrus) was even finished. It survived Borland Paradox, and even nowadays Visual FoxPro is still in daily use in thousands of companies worldwide. Actually, I'm glad that I had the chance to foster my programming knowledge with Visual FoxPro. After his excellent work in software development, Kerry went on to a second career as a writer. I'm looking forward to reading his other titles soon:

    Read the article

  • How do I prevent IIS 8 from stopping idle ASP.NET applications?

    - by Lambo Jayapalan
    I have an asp.net application running on Windows 2012 in IIS 8 that has a very time consuming application start process (essentially the code running in the Application_Start() event can take up to 2 minutes). Thus I'd like to minimize the number of times the application is started so that the user can avoid a long wait. I've enabled Preload in the application settings, and I've set the Start Mode to AlwaysRunning in the application pool. Yet the application still ends after not being used for a while, resulting in a very long time for the first visit to the website after the application shuts down. Does anyone have any ideas on how I can prevent this? Thanks

    Read the article

  • avr-gcc 4.7 on ubuntu 12.04

    - by birky
    Please how can I install avr-gcc version 4.7 on ubuntu 12.04? :~$ avr-gcc -v Using built-in specs. COLLECT_GCC=avr-gcc COLLECT_LTO_WRAPPER=/usr/lib/gcc/avr/4.5.3/lto-wrapper Target: avr Configured with: ../src/configure -v --enable-languages=c,c++ --prefix=/usr/lib --infodir=/usr/share/info --mandir=/usr/share/man --bindir=/usr/bin --libexecdir=/usr/lib --libdir=/usr/lib --enable-shared --with-system-zlib --enable-long-long --enable-nls --without-included-gettext --disable-libssp --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=avr Thread model: single gcc version 4.5.3 (GCC) :~$ gcc -v Using built-in specs. COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/4.6/lto-wrapper Target: x86_64-linux-gnu Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.6.3-1ubuntu5' --with-bugurl=file:///usr/share/doc/gcc-4.6/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.6 --enable-shared --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.6 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --enable-plugin --enable-objc-gc --disable-werror --with-arch-32=i686 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu Thread model: posix gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)

    Read the article

  • Apache no longer starts at Windows boot up

    - by w3d
    I have Apache installed as part of XAMPP - local test server. It is configured as a Windows (XP) Service. Startup type is "Automatic". For a long time now it has always started when Windows boots up, but recently this has stopped happening. I now need to start it manually via the XAMPP Control Panel - at which point it appears to start up perfectly OK. The only recent updates to the machine (that I recall) are Windows Updates - none of which appear to have "known issues" that relate to this. And updates to Google Chrome. Any ideas what could prevent Apache from starting automatically at Windows (XP) boot up? EDIT#1 There are 2 related Errors in my system event log regarding the Service Control Manager: Timeout (30000 milliseconds) waiting for the Apache2.2 service to connect. The Apache2.2 service failed to start due to the following error: The service did not respond to the start or control request in a timely fashion. When I manually start the Apache server after boot up there are 2 "information" events stating that it was "sent a start control" and that it "entered the running state". Although I notice it appears to take 19 seconds between the start control being sent and entering a running state - according to the event log. So, maybe 30 seconds during boot up isn't long enough (anymore) for Apache to start??

    Read the article

  • Automated Error Reporting in .NET Reflector - harnessing the most powerful test rig in existence

    - by Alex.Davies
    I know a testing system that will find more bugs than all the unit testing, integration testing, and QA you could possibly do. And the chances are you're not using it. It's called your users. It's a cliché that you should test so that you find your bugs rather than your users. Of course you should. But it's also a cliché that no software is ever shipped bug-free. Lost cause? No, opportunity! I think .NET Reflector 6 is pretty stable. In fact I know exactly how stable it is, because some (surprisingly high) proportion of its users tell me every time it crashes. If they press "Send Error Report", I get the error report, and then I fix it. As a rough guess, while a standard stack trace is enough to fix a problem 30% of the time, having all those local variables in the stack trace means I can fix it about 80% of the time. How does this all happen? Did it take ages to code this swish system? Nope, it was one checkbox in SmartAssembly. It adds some clever code to your assembly to capture local variables every time an exception is thrown, and to ask your user to report it to you, with a variety of other useful information. Of course not all bugs show up as exceptions. But if you get used to knowing that SmartAssembly will tell you when an exception happens, you begin to change your coding style. Now, as long as an exception gets thrown in any situation you don't expect, you'll fix it if it ever happens. You'll start throwing exceptions liberally, and stop having to think about whether tiny edge cases are possible, as long as they throw an exception if they happen.
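
    SmartAssembly does this by instrumenting the compiled assembly, but the underlying idea – at the point of an unhandled exception, walk the stack and record each frame's local variables alongside the trace – is easy to sketch in any language. The following is a conceptual analogue in Python using sys.excepthook, not a description of how SmartAssembly itself is implemented.

```python
import sys
import traceback

def reporting_excepthook(exc_type, exc_value, tb):
    """On any unhandled exception, dump the stack trace plus per-frame locals."""
    print("".join(traceback.format_exception(exc_type, exc_value, tb)), file=sys.stderr)
    frame_tb = tb
    while frame_tb is not None:
        frame = frame_tb.tb_frame
        print(f"--- locals in {frame.f_code.co_name} "
              f"({frame.f_code.co_filename}:{frame_tb.tb_lineno}) ---", file=sys.stderr)
        for name, value in frame.f_locals.items():
            print(f"    {name} = {value!r}", file=sys.stderr)
        frame_tb = frame_tb.tb_next
    # A real error reporter would send this to a server instead of stderr.

sys.excepthook = reporting_excepthook

def divide(numerator, denominator):
    scale = 3
    return numerator * scale / denominator

if __name__ == "__main__":
    divide(10, 0)   # triggers the hook and shows numerator, denominator, scale
```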

    Read the article

  • Get the MakeUseOf eBook Guide to Hacker Proofing Your PC

    - by ETC
    If you’re interested in checking out a solid overview of PC security best practices and tips, our friends over at MakeUseOf.com have released another free book in their computer-oriented eBook series. The fifty-page ebook HackerProof: Your Guide to PC Security covers a variety of topics including types of malware, operating systems and their inherent vulnerabilities, security best practices, tools for protecting your PC, the importance of security prep and backups, and recovering from malware attacks. It’s a nice and compact text, perfect for brushing up on security best practices for your own machine or sending to friends and relatives that could use a little after-school tutoring on keeping their computer secure and out of trouble. The best tip from the book? The overall message to be cautious and be preemptive in your security efforts is a great meta-tip to take away. Up-to-date definition files and a healthy suspicion of random links and email attachments go a long, long way towards staying safe. HackerProof: Your Guide to PC Security [Direct Link via MakeUseOf]

    Read the article

  • How do I make exchange forward email to a non-exchange mail server (postfix on mac)?

    - by johnny
    Hi, I'm not sure what I'm doing. Exchange has been working a long time. I just installed a mac mail server postfix/dovecot. They both work but not together. If I send email directly to the new server with [email protected], I can send email to the Internet and I can receive email at the [email protected]. But, if I log onto my exchange account with OWA and I put in [email protected] I never receive it and I never get any errors back in OWA or on my mac server logs. It just disappears. On the mac, I thought that as long as I had mail running and had SMTP set to allow incoming email checked it would receive email from anywhere. How can I make my Exchange server send email to the mac box if the email doesn't exist in the exchange server? Is this where I would put a new SMTP connector and a bridgehead using DNS? Thank you for any help.
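
    Whatever routing ends up being configured on the Exchange side (the SMTP connector you mention is the usual mechanism for handing a subdomain off to another mail host), it helps to confirm independently that the postfix box accepts mail for the target mailbox when it arrives from another machine. Here is a small sketch that speaks SMTP to it directly using Python's standard smtplib; the host and address names are placeholders.

```python
import smtplib

# Placeholders: the postfix/dovecot host and a mailbox that should exist on it.
MAC_SERVER = "mac.example.com"
RCPT = "user@mac.example.com"
SENDER = "probe@example.com"

def probe_delivery(host, sender, rcpt):
    """Speak SMTP to the postfix box and see whether it accepts mail for rcpt."""
    with smtplib.SMTP(host, 25, timeout=10) as smtp:
        smtp.set_debuglevel(1)   # echo the SMTP conversation to stderr
        try:
            smtp.sendmail(
                sender, [rcpt],
                "Subject: relay probe\r\n\r\nTest message from the probe script.\r\n")
        except smtplib.SMTPRecipientsRefused as refusal:
            print("recipient refused:", refusal.recipients)
        else:
            print("message accepted for", rcpt)

if __name__ == "__main__":
    probe_delivery(MAC_SERVER, SENDER, RCPT)
```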

    Read the article

< Previous Page | 102 103 104 105 106 107 108 109 110 111 112 113  | Next Page >