Search Results

Search found 2130 results on 86 pages for 'serve u'.


  • ArchBeat Link-o-Rama for 2012-06-15

    - by Bob Rhubart
    - URGENT BULLETIN: Disable JRE Auto-Update for All E-Business Suite End-Users
      All desktop administrators must IMMEDIATELY disable the Java Runtime Environment (JRE) Auto-Update option for all Windows end-user desktops connecting to Oracle E-Business Suite Release 11i, 12.0, and 12.1.
    - WebLogic JMS / AQ bridge with JBoss AS 7 | Edwin Biemond
      Oracle ACE Edwin Biemond explains "how you can retrieve JMS messages from JBoss with the help of a WebLogic Foreign Server and how to push messages to JBoss AS with the help of a WebLogic JMS Bridge."
    - The Healthy Tension That Mobility Creates | Hernan Capdevila
      "Mobile device management in the cloud makes good sense," says Hernan Capdevila. "I don't think IT departments should be hosting device management and managing that complexity. It should be a cloud service."
    - OPN: Fusion Middleware Summer Camps in July in Lisbon and Munich
      For specialized Oracle Partners. Participation is limited to two people per company at each bootcamp. Registration is first come, first served. Take note of the skill requirements and prerequisites.
    - Podcast: Cows in the Cloud and the importance of standards
      In part two of a four-part program, cloud experts Jim Baty, Mark Nelson, William Vambenepe, and Ajay Srivastava explain cows in the cloud and talk about the importance of standards. Community members talk about the challenges and opportunities mobile computing presents for IT architects. Apple has sold 55 million iPads since 2010. Gartner expects a 98% increase in tablet sales in 2012, to 118 million. Nielsen reports that smartphones now account for nearly half of all mobile phones in the U.S., a 38% increase over 2011. And the mobile juggernaut is just getting started.
    - Thought for the Day
      "Why are video games so much better designed than office software? Because people who design video games love to play video games. People who design office software look forward to doing something else on the weekend." — Ted Nelson (Source: SoftwareQuotes.com)

    Read the article

  • How to advertise (free) software?

    - by nebukadnezzar
    I'm not sure if this fits on SO, but other SE sites don't seem to fit either, so I understand if this question gets moved, although I'd like to avoid getting it closed as off-topic, since I think it might fit, considering this part of the FAQ: Stack Overflow is for professional and enthusiast programmers, ... covers … a specific programming problem ... matters that are unique to the programming profession. Sorry for the lengthy introduction, though. When software is advertised, it is usually software for one (or more) specific purpose, such as:
    - Mozilla Firefox - a web browser
    - Ubuntu - an operating system
    - Python - a programming language
    - Visual Studio - a development studio
    ... and so on. But when writing libraries, that is, software that doesn't necessarily serve one specific purpose, but instead multiple purposes, and that is usually meant to be used inside an application, such as:
    - Irrlicht - a 3D engine
    - Qt - an application framework
    ... the process of advertisement gets a little more difficult. I'm a developer of the latter kind of software, and I naturally want to advertise it. It's not commercial software; it's not GPL either. It's completely free (licensed under the MIT License :-)). I naturally host my stuff at github, which technically makes the software very easy to access, and I thought these might be possible options, although I have no experience with them:
    - Submit the software to Freshmeat, and hope for the best
    - Submit the software to Sourceforge, and hope someone accidentally stumbles over it
    - Write spam mails, and get death threats via mail
    ... But something tells me these methods are probably not the best ones. So, my final question would be: how does the average Joe hobby programmer advertise his/her software library?

    Read the article

  • Register Now, Free Webinar! Driving Self-Service Learning with UPK Knowledge Center

    - by Kathryn Lustenberger
    UPK Proficiency Forum: Driving Self-Service Learning with UPK Knowledge Center, July 16, at 11 am Pacific. Join Oracle University for the next UPK Proficiency Forum on July 16, at 11 am Pacific. Beth Renstrom and Kathryn Lustenberger from UPK Product Management at Oracle will present an exciting session on "Driving Self-Service Learning with UPK Knowledge Center." Knowledge Center is a powerful, web-based knowledge repository that delivers an out-of-the-box deployment method for UPK content, enables extensive tracking and reporting, and can serve as a content repository for UPK and non-UPK content. Hear how your organization can use Knowledge Center to centralize both UPK and non-UPK assets to provide self-service, role-based, curriculum-style learning. Understand how Knowledge Center can be used to deploy a collaborative user and expert environment where users can turn knowledge into productivity, ensure ongoing user competency, and measure organizational readiness across your organization. You will walk away from this session with a better understanding of Oracle's User Productivity Kit Professional, Knowledge Center, and all the benefits they have to offer your organization. You won't want to miss this free webinar! Attendance is limited. Register Now!

    Read the article

  • Design Application to "Actively" Invite Users (pretend they have privileges)

    - by user3086451
    I am designing an application where users message one another privately, and may send messages to any Entity in the database (an Entity may not have a user account yet; it is a professional database). I am not sure how best to design the database and the API to allow messaging unregistered users. The application should remain secure, with data only accessed by those with the correct permissions. Messages sent to persons without user accounts serve as an invitation: the invited person should be able to view the message, act on it, and complete the user registration upon receiving an InviteMessage. In simple terms, I have:
    - User: misc user fields (email, pw, dateJoined)
    - Entity (large professional dataset): personalDetails..., user -> User (may be null)
    - UserMessage: sender -> User, recipient -> User, dateCreated, messageContent, other fields...
    - InviteMessage: sender -> User, recipient -> Entity, expiringUrl, inviteeEmail, inviteePhone
    I plan to alert the user when they select a recipient who is not registered yet, and inform them that they may send the message as an invitation by providing an email or phone number where we can send the invitation. Invitations will have a unique, one-time-use URL, e.g. uuid.uuid4(). When accessed, the invitee will see the InviteMessage and details about completing his/her registration profile. When registration is complete, the InviteMessage details are copied to a new instance of UserMessage (so no data is lost), and it is assigned to the newly created User. The ability to interact with and invite persons who do not yet have accounts is a key feature of the application, and it seems better to separate the invitations from the private app messages (easier to keep the functionality separate, better if the data model changes). Is this a reasonable, good design? If not, what would you suggest? Do you have any improvements? Am I correct to choose to create a separate endpoint for creating invitations via the API?
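
    Sketched out in plain Python (framework-agnostic, and every name here is illustrative rather than a proposed schema), the flow I have in mind looks something like this:

        # Minimal sketch of the invitation flow; all names are illustrative.
        import uuid
        from dataclasses import dataclass, field
        from datetime import datetime
        from typing import Optional

        @dataclass
        class User:
            email: str
            date_joined: datetime = field(default_factory=datetime.utcnow)

        @dataclass
        class Entity:
            personal_details: str
            user: Optional[User] = None    # stays None until registration

        @dataclass
        class UserMessage:
            sender: User
            recipient: User
            content: str
            date_created: datetime = field(default_factory=datetime.utcnow)

        @dataclass
        class InviteMessage:
            sender: User
            recipient: Entity
            content: str
            invitee_email: str
            # One-time token backing the expiring invitation URL.
            token: str = field(default_factory=lambda: uuid.uuid4().hex)

        def complete_registration(invite: InviteMessage, new_user: User) -> UserMessage:
            """Attach the new account to the Entity and convert the invitation
            into a regular private message so its content is not lost."""
            invite.recipient.user = new_user
            return UserMessage(sender=invite.sender, recipient=new_user,
                               content=invite.content)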

    Read the article

  • SQLAuthority News – Microsoft Whitepaper – AlwaysOn Solution Guide: Offloading Read-Only Workloads to Secondary Replicas

    - by pinaldave
    SQL Server 2012 has many interesting features, but the most talked-about one is AlwaysOn. Performance tuning is always a hot topic; I see a lot of demand for it and plenty of business around it. However, when people talk about performance tuning they often think of it only as query tuning or server tuning. Those are valid points, but a performance tuning expert usually understands the business workload and business logic before making suggestions. For example, if a performance tuning expert analyzes a workload and realizes that there are plenty of reports as well as read-only queries on the server, they can certainly consider alternative options. If the read-only data is not required in real time, or a slight delay in the data is acceptable, it makes sense to divide the workload: a secondary replica of the original data can serve all the read-only queries and reports, which is a good idea in most cases where much of the workload does not depend on real-time data. SQL Server 2012 introduced AlwaysOn, which fits this scenario very well and provides a solution for read-only workloads. Microsoft has recently published a white paper on exactly this subject. I recommend reading it to every SQL enthusiast who is going to implement a solution to offload read-only workloads to secondary replicas. Download the white paper: AlwaysOn Solution Guide: Offloading Read-Only Workloads to Secondary Replicas Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: AlwaysOn
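
    As an aside, the way a reporting client opts in to the readable secondary is through its connection string. Here is a small sketch using Python and pyodbc; the listener, database, and table names are placeholders, and it assumes a readable secondary plus read-only routing are already configured:

        # Connect with read-only intent so AlwaysOn routes us to a secondary.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server Native Client 11.0};"
            "SERVER=tcp:AgListener,1433;"   # availability group listener, not a node
            "DATABASE=SalesDB;"
            "Trusted_Connection=yes;"
            "ApplicationIntent=ReadOnly;"   # the key part: read-only intent
        )
        for row in conn.cursor().execute("SELECT COUNT(*) FROM dbo.Orders"):
            print(row[0])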

    Read the article

  • Java EE 7 JSR Submitted

    - by Tori Wieldt
    Java EE 7 has been filed as JSR 342 in the JCP program. This JSR (Java Specification Request) will develop Java EE 7, the next version of the Java Platform, Enterprise Edition. It is an "umbrella JSR" because the specification includes a collection of several other JSRs. The proposal suggests the addition of two new JSRs: Concurrency Utilities for Java EE (JSR-236) and JCache (JSR-107), as well as updates to JPA, JAX-RS, JSF, Servlets, EJB, JSP, EL, JMS, JAX-WS, CDI, Bean Validation, JSR-330, JSR-250, and Java Connector Architecture. There are also two new APIs under discussion: a Java Web Sockets API and a Java JSON API. These are the new JSRs that are currently up for ballot:
    - JSR 342: Java Platform, Enterprise Edition 7 Specification
    - JSR 340: Java Servlet 3.1 Specification
    - JSR 341: Expression Language 3.0
    - JSR 343: Java Message Service 2.0
    - JSR 344: JavaServer Faces 2.2
    All 5 JSRs are now up for Executive Committee voting, with ballots closing on 14 March, and are slated for inclusion in Java EE 7. All of these JSRs are also open for Expert Group nominations. Any JCP member can nominate themselves to serve on the Expert Groups for these JSRs. Details on how to become a JCP member are on jcp.org. The JCP gives you a chance to have your own work become an official component of the Java platform and to offer suggestions for improving and growing the technology. Either way, everyone in the Java community benefits from your participation. There's a nice discussion about Java EE 7 in this podcast with Java EE spec lead Roberto Chinnici, and more information in this blog post on the Aquarium. It's exciting to see so much activity currently underway.

    Read the article

  • Google music search: a better way to listen.

    - by anirudha
    Somebody who wants to listen to music often pays an online music store quite a lot for online listening; otherwise they put up with a bad, low-quality experience on YouTube, which may even be illegal, because the uploader may not have permission or the right to upload the track, and there is no guarantee they won't overlay their own ads or degrade the quality. Now you can forget YouTube and the rest, because Google music search is much better: just go there, search for a song by movie name or song title, and click and listen. The quality is much better than elsewhere, though it is not Google itself hosting the music; the results they list come from other websites. One thing does feel wrong in Google music search: if I search for "sajda" they never show me results for "sadka", even though in everyday use the two words are used interchangeably and a song may start with either. I think they should show a "Did you mean 'sadka'?" link when I search for "sajda", just as many online bookstores show keywords related to your search, and related products or books, when you visit a product page. All in all, it is a better option for users who want good-quality music without the search hassle.

    Read the article

  • Oracle Cloud Solutions @ Cloud Expo East (June 10-12)

    - by Gene Eun
    Oracle is proud to be the Platinum Sponsor at next week's Cloud Expo East (June 10-12) at the Javits Center in New York City. This is the fourth consecutive year Oracle has sponsored Cloud Expo. As in years past, Oracle has a full schedule of sessions, shown below. We'd love to have you be our guest at Cloud Expo East, attend one of our sessions, and hear more about our thought leadership and leading solutions in the Cloud and Big Data. We'll also have booth #207, so please stop by and see a demo of many of our cloud offerings.
    - Tuesday, June 10, 4:40 pm - 5:15 pm: "Top 5 Best Practices for your Application Platform As a Service" (Cloud Business and the API Economy | Deploying the Cloud track; room TBD)
    - Wednesday, June 11, 9:10 am - 10:10 am: "Cloud Odyssey: A Hero's Quest" (keynote, all tracks; Keynote Hall)
    - Wednesday, June 11, 10:15 am - 10:45 am: "Big Data Management System: Smart SQL Processing Across Hadoop and Your Data Warehouse" (general session, all tracks; Keynote Hall)
    - Wednesday, June 11, 2:50 pm - 3:25 pm: "Plug into the Cloud: Your Blueprint to Database as a Service" (Mobile | Hot Topics track; room TBD)
    - Wednesday, June 11, 2:50 pm - 3:25 pm: "From Supply-led to Demand-led: Lead Your IT to Better Serve Your Users" (Cloud Business and the API Economy | Deploying the Cloud track; room TBD)
    - Thursday, June 12, 2:50 pm - 3:25 pm: "Reduce Complexity and Accelerate Innovation with IaaS and PaaS" (Cloud Business and the API Economy | Deploying the Cloud track; room TBD)
    At Cloud Expo East, you'll get to learn about and experience the latest in Cloud and Big Data. If you don't have a pass to Cloud Expo, no problem. Oracle is giving away FREE VIP Gold Passes! We would love to have you attend Cloud Expo on us. Just go to Oracle's Cloud Expo 2014 event registration page and follow the instructions for a complimentary pass. Stay tuned to this blog and follow us on Twitter (@OracleCloudZone) during and after Cloud Expo for more insight and observations about this year's conference.

    Read the article

  • Why is testing MVC Views frowned upon?

    - by Peter Bernier
    I'm currently setting the groundwork for an ASP.Net MVC application and I'm looking into what sort of unit tests I should be prepared to write. I've seen in multiple places people essentially saying "don't bother testing your views; there's no logic, it's trivial, and it will be covered by an integration test." I don't understand how this has become the accepted wisdom. Integration tests serve an entirely different purpose than unit tests. If I break something, I don't want to find out a half-hour later when my integration tests break; I want to know immediately. Sample scenario: let's say we're dealing with a standard CRUD app with a Customer entity. The customer has a name and an address. At each level of testing, I want to verify that the Customer retrieval logic gets both the name and the address properly. To unit-test the repository, I write an integration test to hit the database. To unit-test the business rules, I mock out the repository, feed the business rules appropriate data, and verify my expected results are returned. What I'd like to do: to unit-test the UI, I mock out the business rules, set up my expected customer instance, render the view, and verify that the view contains the appropriate values for the instance I specified. What I'm stuck doing: to unit-test the repository, I write an integration test, set up an appropriate login, create the required data in the database, open a browser, navigate to the customer, and verify the resulting page contains the appropriate values for the instance I specified. I realize that there is overlap between the two scenarios discussed above, but the key difference is the time and effort required to set up and execute the tests. If I (or another dev) removes the address field from the view, I don't want to wait for the integration test to discover this. I want it discovered and flagged in a unit test that gets run multiple times daily. I get the feeling that I'm just not grasping some key concept. Can someone explain why wanting immediate test feedback on the validity of an MVC view is a bad thing? (Or if not bad, then not the expected way to get said feedback?)
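
    To make the pattern I'm after concrete, here is roughly the shape of the test, sketched with Python and Jinja2 only because that makes a self-contained example; the ASP.Net MVC version would render the Razor view against a stubbed model in the same way, and the template and field names here are invented:

        # Unit-testing a view: stub the business rules, render, assert on output.
        import unittest
        from jinja2 import Template

        CUSTOMER_VIEW = Template(
            "<p>{{ customer.name }}</p><p>{{ customer.address }}</p>")

        class FakeCustomer:
            # Canned data standing in for the mocked business-rules layer.
            name = "Jane Doe"
            address = "1 High Street"

        class CustomerViewTest(unittest.TestCase):
            def test_view_renders_name_and_address(self):
                html = CUSTOMER_VIEW.render(customer=FakeCustomer)
                self.assertIn("Jane Doe", html)
                self.assertIn("1 High Street", html)

        if __name__ == "__main__":
            unittest.main()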

    Read the article

  • ArchBeat Link-o-Rama for November 20, 2012

    - by Bob Rhubart
    - Oracle Utilities Application Framework V4.2.0.0.0 Released | Anthony Shorten
      Principal Product Manager Anthony Shorten shares an overview of the changes implemented in the new release.
    - Towards Ultra-Reusability for ADF - Adaptive Bindings | Duncan Mills
      "The task flow mechanism embodies one of the key value propositions of the ADF Framework," says Duncan Mills. "However, what if we could do more? How could we make task flows even more re-usable than they are today?" As you might expect, Duncan has answers for those questions.
    - Oracle BPM Process Accelerators and process excellence | Andrew Richards
      "Process Accelerators are ready-to-deploy solutions based on best practices to simplify process management requirements," says Capgemini's Andrew Richards. "They are considered to be 'product grade,' meaning they have been designed, engineered, documented and tested by Oracle themselves to a level that they can be deployed as-is for a solution to a problem or extended as appropriate for a particular scenario."
    - Oracle SOA Suite 11g PS 5 introduces BPEL with conditional correlation for aggregation scenarios | Lucas Jellema
      An extensive, detailed technical post from Oracle ACE Director Lucas Jellema.
    - Check Box Support in ADF Tree Table Different Levels | Andrejus Baranovskis
      Oracle ACE Director Andrejus Baranovskis updates last year's "ADF Tree - How to Autoselect/Deselect Checkbox" post with new information.
    - As Boom Lures App Creators, Tough Part Is Making a Living
      Great New York Times article about mobile app development that also touches on other significant IT issues.
    - Thought for the Day
      "Building large applications is still really difficult. Making them serve an organisation well for many years is almost impossible." — Malcolm P. Atkinson (Source: SoftwareQuotes.com)

    Read the article

  • What happened to the Journal of Game Development?

    - by Ricket
    The lengthy mission statement from its website states: The lack of game-specific research has prevented many in the academic community from embracing game development as a serious field of study. The Journal of Game Development (JOGD), however, provides a much-needed, peer-reviewed, medium of communication and the raison d'etre for serious academic research focused solely on game-related issues. The JOGD provides the vehicle for disseminating research and findings indigenous to the game development industry. It is an outlet for peer-reviewed research that will help validate the work and garner acceptance for the study of game development by the academic community. JOGD will serve both the game development industry and academic community by presenting leading-edge, original research, and theoretical underpinnings that detail the most recent findings in related academic disciplines, hardware, software, and technology that will directly affect the way games are conceived, developed, produced, and delivered. The Journal of Game Development was established in 2003. It's hard to find any information about the issues but at four issues per year, I estimate the last issue was distributed sometime in 2005 or 2006. It had a good editorial board of college professors and a founding editor from Ubisoft. The list of articles looks good. The price was reasonable. So what happened to it? Its website recently went down but you can see the last Archive.org version. The editor-in-chief is a professor at my school so I intend to ask him in person in a week or two, but I thought I'd see what you might be able to dig up about it first. Of course I will be sure to add an answer with his official word on the matter at that time.

    Read the article

  • Why is Samba Access from Windows So Slow?

    - by swalker2001
    I have set up a file server using Ubuntu 12.04 Server. The purpose is to serve several network drives to Windows users, drives that have heretofore been served by numerous NAS drives. I have Samba set up with one share defined so far. I can connect to it fine from my test Windows 7 and Windows XP machines. When I do a directory listing on the share from Windows, it can take up to two minutes to get all the files listed; it would have taken about 1.5 seconds when I was using the Buffalo NAS. Sometimes it times out with no response at all. I have used the default smb.conf and simply added the following for the share I have set up so far:

        [engineering]
        comment = Ubuntu File Server Share
        path = /networkdriveshares/engineering
        browsable = yes
        guest ok = yes
        read only = no
        create mask = 0755

    I have tried changing the workgroup setting to the Active Directory domain name our Windows computers use, but didn't notice any difference. The only other change I made to the default smb.conf was adding in the recommended socket settings:

        socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192

    There is lots of information about slow Samba shares online, but I have tried all of the solutions I found and none have made a lick of difference. If there is no solution, is there a better way to set up a file server to be used by Windows clients?
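
    For what it's worth, one way to narrow down whether the delay is in Samba's directory read or in the Windows client's extra per-file attribute traffic is to time a bare listing from another Linux machine with the share mounted (the mount point below is an assumption):

        # Time a raw directory listing of the mounted share.
        import os
        import time

        MOUNT_POINT = "/mnt/engineering"   # e.g. mounted with mount -t cifs

        start = time.perf_counter()
        names = os.listdir(MOUNT_POINT)
        elapsed = time.perf_counter() - start
        print(f"{len(names)} entries listed in {elapsed:.2f}s")
        # Fast here but slow in Explorer usually points at per-file
        # stat/attribute requests from Windows rather than the listing itself.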

    Read the article

  • Why does everybody hate SharePoint?

    - by Ryan Michela
    Reading this topic about the most over-hyped technologies, I noticed that SharePoint is almost universally reviled. My experience with SharePoint (especially the most recent versions) is that it accomplishes its core competencies smartly. Namely:
    - Centralized document repository: get all those office documents out of email (with versioning)
    - User-editable content creation for internal information dissemination: look, an HR site with current phone numbers and the vacation policy
    - Project collaboration: a couple of clicks creates a site with a project's documents, task list, simple schedule, threaded discussion, and possibly a list of all project-related emails
    - Very basic business automation: when you fill out the vacation form, an email is sent to HR
    My experience is that SharePoint only gets really ugly when an organization tries to push it in a direction it isn't designed for. SharePoint is not a CRM, ERP, bug database or external website. SharePoint is flexible enough to serve in a pinch, but it is no replacement for a dedicated tool. (Microsoft is just as guilty of pushing SharePoint into domains it doesn't belong in.) If you use SharePoint for what it's designed for, it really does work. Thoughts?

    Read the article

  • In Scrum, should you split up the backlog in a functional backlog and a technical backlog or not?

    - by Patrick
    In our Scrum teams we use a backlog, which mostly contains functional topics but sometimes also contains technical topics. The advantage of having one backlog is that it becomes easy to choose the topics for the next sprint, but I have some questions: First, to me it seems more logical to have a separate technical backlog, where the developers themselves can add purely technical items, like: we could improve performance in this method, this class lacks some technical documentation, and so on. With one backlog, all developers always have to go via the product owner to have their topics added to the backlog, which seems like additional, unnecessary work for the product owner. Second, if you have a product owner that focuses only on the purely functional items, the purely technical items (like missing technical documentation, code that erodes and should be refactored, classes that always give problems during debugging because they don't have a stable foundation and should be refactored, ...) always end up at the end of the list because "they don't serve the customer directly". By having a separate technical backlog, and time reserved in every sprint for these purely technical items, we can improve the applications functionally, but also keep them healthy inside. What is the best approach? One backlog or two?

    Read the article

  • Multiple sites with the same codebase in Python

    - by Jimmy
    I am trying to run a large number of sites which share about 90% of their code. They are simply designed to query an API and return the results. They will have a common userbase/database but will be configured slightly differently and will have different CSS (perhaps even different templating). My initial idea was to run them as separate applications with a common library, but I have read about the sites framework, which would allow them to run from a single instance of Django, which may help to reduce memory usage. https://docs.djangoproject.com/en/dev/ref/contrib/sites/ Is the sites framework the right approach to a problem like this, and does it have real benefits over running separate applications? Initially I thought it was, but now I think otherwise. I have heard the following: your SITE_ID is set in settings.py, so in order to have multiple sites, you need multiple settings.py configurations, which means multiple distinct processes/instances. You can of course share the code base between them, but each site will need a dedicated worker / WSGI daemon to serve the site. This effectively removes any benefit of running multiple sites under one hood, if each site needs a uWSGI instance running. Alternative ideas of systems: https://github.com/iivvoo/django_layers https://github.com/shestera/django-multisite I don't know what route to take with this.
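
    For what it's worth, my understanding is that the single-process route can avoid hard-coding SITE_ID entirely by resolving the site from the request's Host header. A rough sketch of what I mean (the per-host config dict is my own invented convention, not a Django API):

        # One Django process serving several hosts: leave SITE_ID unset and
        # let django.contrib.sites look the site up from the Host header.
        from django.contrib.sites.shortcuts import get_current_site
        from django.shortcuts import render

        SITE_CONFIG = {  # invented convention: per-host template and CSS
            "alpha.example.com": {"css": "alpha.css", "template": "alpha/home.html"},
            "beta.example.com":  {"css": "beta.css",  "template": "beta/home.html"},
        }

        def home(request):
            site = get_current_site(request)          # resolved per request
            cfg = SITE_CONFIG.get(site.domain, SITE_CONFIG["alpha.example.com"])
            return render(request, cfg["template"], {"css_file": cfg["css"]})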

    Read the article

  • Restful WebAPI VS Regular Controllers

    - by Rohan Büchner
    I'm doing some R&D on what seems like a very confusing topic; I've also read quite a few of the other SO questions, but I feel my question might be unique enough to warrant asking. We've never developed an app using pure WebAPI. We're trying to write a SPA-style app, where the back end is fully decoupled from the front-end code. Assuming our service does not know anything about who is accessing/consuming it, WebAPI seems like the logical route to serve data, as opposed to using the standard MVC controllers and serving our data via an action result converted to JSON. That, to me at least, seems like an MC design... which seems odd, and not what MVC was meant for (look mom... no view). What would be considered normal convention in terms of performing action-y calls? My sense is that my understanding of WebAPI is incorrect. The way I perceive WebAPI is that it's meant to be used in a CRUD sense, but what if I want to do something like "InitialiseMonthEndPayment"? Would I need to create a WebAPI controller called InitialiseMonthEndPaymentController and then perform a POST? That seems a bit weird, as opposed to an MVC controller where I can just add a new action on the MonthEnd controller called InitialisePayment. Or does this require a mindset shift in terms of design? Any further links on this topic would be really useful, as my fear is that we implement something that might be weird and could turn into a coding/maintenance concern later on.
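
    To make the dilemma concrete: the resource-oriented answer is usually to model the verb as a resource, so "initialise month-end payment" becomes a POST to a collection of initialisations. A minimal sketch of that shape, written in Flask rather than WebAPI purely to keep it short and self-contained, with made-up route names:

        # RPC-ish action modelled as a resource: POST creates an "initialisation".
        from flask import Flask, jsonify

        app = Flask(__name__)

        @app.route("/month-end-payments/initialisations", methods=["POST"])
        def initialise_month_end_payment():
            # Kick off the work, then hand back a handle the client can poll.
            job_id = "42"   # placeholder for a real job identifier
            return jsonify({"status": "accepted", "job": job_id}), 202

        if __name__ == "__main__":
            app.run()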

    Read the article

  • What is the recommended minimum object size for gzip benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the webserver. Google recommends: "Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger." We serve our content through Akamai, using their network as a proxy and CDN. What they've told me: "Following up on your question regarding what is the minimum size Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes." My reply: "What is the reason(s) for why Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for facebook? (see below) Google recommends to gzip more aggressively. And that seems appropriate on our site where the most frequent hits, by far, are AJAX calls that are <860 bytes." Akamai's response: "The reason 860 bytes is the minimum size for compression is twofold: (1) The overhead of compressing an object under 860 bytes outweighs the performance gain. (2) Objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them." So I'm here for some fact checking. Is the 860-byte limit due to packet size the end of this reasoning? Why would high-traffic sites push this lower, closer to the 150-byte limit... just to save on bandwidth costs, or is there a performance gain in doing so?
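
    As a quick sanity check on Google's claim, the overhead is easy to demonstrate in a few lines of Python; random bytes stand in for a worst-case (incompressible) payload, so treat this as a demonstration rather than a benchmark of real AJAX responses:

        # Show that gzipping tiny payloads can make them bigger.
        import gzip
        import os

        for size in (50, 150, 860, 5000):
            raw = os.urandom(size)            # incompressible worst case
            packed = gzip.compress(raw)
            print(f"{size:5d} bytes raw -> {len(packed):5d} bytes gzipped")
        # The 50- and 150-byte bodies typically come back larger, because the
        # gzip header and trailer alone cost around 20 bytes.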

    Read the article

  • Prevent Looping and Inefficient Rule Executions by C2B2

    - by JuergenKress
    This recipe, taken from the recently published Oracle SOA Suite 11g Performance Cookbook, gives guidance on how to avoid rule executions that will loop, potentially indefinitely! We'll use an inbound XML fact and a local RL fact as an example. Getting ready: You'll need access to a SOA composite containing an Oracle Business Rules component in JDeveloper to apply this recipe. We'll assume you have an XSD schema with an input type RequestInput containing input and bonus String types, and an output String value called output in a type ResponseOutput. These aren't efficient but serve as an example. We'll step through adding a rule to a composite and creating an RL fact. How to do it:
    1. Open a SOA composite.
    2. Right-click on the Project and select Business Rules (Service Components); use the search box if it is not immediately available.
    3. Give the rule a name and click the green plus icon to add the RequestInput to the input and ResponseOutput to the output types.
    Read the complete article here. SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: C2B2,looping,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Book: DevOps for Developers

    - by Tori Wieldt
    We all know development and operations often act like silos, with "Just throw it over the wall!" being the battle cry. Many organizations unwittingly contribute to the gaps between teams, with management by (competing) objectives; a clash of Agile practices vs. more conservative approaches; and teams using different sets of tools, such as Nginx, OpenEJB, and Windows on developers' machines and Apache, GlassFish, and Linux on production machines. At best, you've got sub-optimal collaboration; at worst, you've got the Hatfields and the McCoys. The book DevOps for Developers helps bridge the gap between development and operations by aligning incentives and sharing approaches for processes and tools. It introduces DevOps as a modern way of bringing development and operations together. It also aims to broaden the use of Agile practices in operations, to foster collaboration and streamline the entire software delivery process in a holistic way. Some individual aspects of DevOps may not be new (for example, you may have used the tool Puppet for years already), but with a new mindset ("my job is not just to code, it's to serve the customer in the best way possible") and a complete set of recipes, you'll be well on your way to success. DevOps for Developers also provides real-world use cases (e.g., how to use Kanban or how to release software), offering a way to be successful in the real development/operations world. DevOps for Developers is written by Michael Hutterman, Java Champion and founder of the Cologne Java User Group. "With DevOps for Developers, developers can learn to apply patterns to improve collaboration between development and operations as well as recipes for processes and tools to streamline the delivery process," Hutterman explains.

    Read the article

  • Shared to Dedicated or Amazon CloudFront to improve performances and keep secured?

    - by user978548
    I have a WordPress site which currently takes about 1.8s to 2.5s for the home page to completely load in my country. The page weight is about 700 KB (static content included). In order to increase performance, I'm considering two solutions:
    - Switching to a dedicated host.
    - Using Amazon S3 / CloudFront to serve static content.
    My current shared hosting has servers in a neighboring country but not exactly in mine, and both Amazon and the dedicated hosting have some there, so that's already an advantage. So considering all that, I still have three questions remaining:
    1. Currently having low traffic (100 unique visitors/day, but growing), will I see a huge difference between my shared hosting and a dedicated server?
    2. Knowing that I already use a cookie-less domain to deliver static content (but using a redirection to the same server), would using Amazon S3 make a real difference?
    3. Talking about the cons of a dedicated server vs Amazon S3: if I choose for the dedicated server something like Ubuntu Server, do daily package updates, and have only port 80 open, would that be sufficient in terms of security (in comparison with my current shared hosting, which manages everything for me)?

    Read the article

  • What are the advantages of programming under an OS as opposed to a bare metal executive?

    - by gby
    Assume you are presented with an embedded system application to program, in C, on a multi-core environment (think a Cavium or Tilera) and need to choose between two environments: code the application under Linux in SMP mode, or code the application under a thin bare metal executive (something like a very minimal RTOS), perhaps with a single core running UP Linux that can serve control tasks. For the purpose of this question, assume that both environments provide the same level of performance guarantees in any measurable aspect of run-time performance, including number of meaningful actions per second, jitter, latency, real-time considerations, the works. (And yes, I realize this is by far not a trivial assumption at all; bear with me.) How would you justify going with a Linux SMP based solution rather than a bare metal thin executive solution? The question may seem silly. It certainly seems obvious to me, but I have to convince someone who does not think the same. Could you help make a list of arguments in favor of choosing a real SMP-aware OS (Linux) vs. a bare metal executive, assuming performance guarantees are NOT an issue? Many thanks

    Read the article

  • Unable to print login-required images in IE

    - by Tim Fountain
    I have some images in a section of a site that require the user to be logged in in order to view them. These images are served by a PHP script, which checks the user's login state and, if valid, serves the binary data with the appropriate headers. This all works fine. The issue comes when a user tries to print one of these images. In Internet Explorer, when they go to print preview they get the broken-image box with a red cross in the corner instead of the actual file. This is what gets printed also. All other browsers can print the images without issue. I have some images elsewhere on the site that are also served via PHP but don't require a login. These print fine. The PHP-powered HTML pages on the site that require a login also print fine in IE. It's just login-required images. The user hitting print preview does not seem to result in an additional HTTP request to the server for the file. However, I do see an additional HTTP request a few seconds later that comes from the same IP (which may or may not be related); this request includes no host header, no REQUEST_URI and no user agent. The 'please login' page sends an appropriate 403 header. I've also added a far-future expires header to the image response itself to ensure that browsers can serve/print the files from their own cache, but this hasn't made any difference. Why can't IE print the images, and what else can I do to investigate or fix the problem?
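
    For reference, the serving logic boils down to something like the sketch below (written in Python/Flask here rather than my actual PHP, with invented paths); the Cache-Control header is the part I have been experimenting with, since 'private' allows the user's own browser cache while keeping shared proxies out:

        # Auth-checked image serving with browser-only caching.
        import os
        from flask import Flask, abort, send_file, session

        app = Flask(__name__)
        app.secret_key = "change-me"   # placeholder

        @app.route("/images/<name>")
        def protected_image(name):
            if not session.get("user_id"):
                abort(403)                          # the 'please login' response
            safe_name = os.path.basename(name)      # avoid path traversal
            response = send_file(os.path.join("/srv/private-images", safe_name))
            response.headers["Cache-Control"] = "private, max-age=86400"
            return response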

    Read the article

  • https:// search results appearing on Google for purely http:// site

    - by hydrurga
    I started weeding through my site's search results from Google today, using a site: search, to determine if there are any links that cause 404s and thus need redirecting. To my amazement I noticed numerous https:// results relating to various pages. My site doesn't have an SSL certificate, doesn't serve such pages, doesn't internally link to https:// pages, doesn't include any such files in its sitemap.xml and, for all of these, never has. I decided to do a Google search for https://<my site> and found one site that incorrectly refers to the root of my site with a https:// prefix; I will try to contact them to get them to correct this. I'm not sure however how Googlebot managed to index the non-root files as https://. I can't find any external links to them and surely, without certification, Googlebot should have stalled at the first request? I've just added the following lines to the site's .htaccess (although the surfer still has to navigate through the browser's "This site is a security risk. Abandon hope all ye who enter here!" message(s) first to get there), replacing <my site> with my domain name:

        RewriteEngine On
        RewriteCond %{HTTPS} on
        RewriteRule ^(.*)$ http://www.<my site>.org/$1 [R=301,L]

    My big question is this, though: I would like to use the Google Webmaster Tools Remove URLs feature to remove the https:// pages from the index. Can I be guaranteed that this will only remove the https:// versions of each relevant page and not the valid http:// versions? My thanks to anyone who can help me out with this particular question and the issue in general.
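
    As a sanity check on the new rule, a quick probe like the following (using Python's requests library; the domain is a placeholder, and verify=False is needed precisely because there is no valid certificate) should show a single 301 pointing at the http:// twin of each page:

        # Confirm the HTTPS-to-HTTP redirect responds with one clean 301.
        import requests

        r = requests.get("https://www.example.org/some/page",
                         allow_redirects=False, verify=False)
        print(r.status_code)               # expect 301
        print(r.headers.get("Location"))   # expect http://www.example.org/some/page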

    Read the article

  • Inverted LACK Table Serves as a Perfect Gear Rack [DIY]

    - by Jason Fitzpatrick
    We’ve seen IKEA gear hacked to hold audio and computer gear before, but this mod adds in a simple and effective twist. LACK end tables are, conveniently, the same width as a standard server rack. This makes it super simple for DIYers to mount their gear right into the legs of the table with no modification necessary. In this case, however, Winston Smith included a clever update to the mod. Rather than leave it like a standard table, he flipped the table upside down for increased stability and a stronger connection between the legs of his improvised audio rack and the table-top-turned-floor-plate. He then finished it with a matching LACK shelf piece to serve as a turn-table stand. His gear is stored cleanly, off the floor, and in a sturdy container, all for about $25, a definite bargain when it comes to storage racks. Hit up the link below for more information and pictures. LACK Rack & EXPEDIT Desktop [IKEA Hackers]

    Read the article

  • Effortlessly resize images in Orchard 1.7

    - by Bertrand Le Roy
    I’ve written several times about image resizing in .NET, but never in the context of Orchard. With the imminent release of Orchard 1.7, it’s time to correct this. The new version comes with an extensible media pipeline that enables you to define complex image processing workflows that can automatically resize, change formats, or apply watermarks. That is not the subject of this post, however. What I want to show here is one of the underlying APIs that enable that feature, which comes in the form of a new shape. Once you have enabled the media processing feature, a new ResizeMediaUrl shape becomes available from your views. All you have to do is feed it a virtual path and size (and, if you need to override defaults, a few other optional parameters), and it will do all the work for you of creating a unique URL for the resized image and writing that image to disk the first time the shape is rendered:

        <img src="@Display.ResizeMediaUrl(Path: img, Width: 59)"/>

    Notice how I only specified a maximum width. The height could of course be specified, but in this case it will be automatically determined so that the aspect ratio is preserved. The second time the shape is rendered, it will notice that the resized file already exists on disk and serve it directly, so caching is handled automatically and the image can be served almost as fast as the original static one, because it is also a static image. Only the URL generation and the check for the file's existence take time. [The original post shows a screenshot of the generated thumbnail files on disk.] In the case of those product images, the product page will download 12kB worth of images instead of 1.87MB. The full-size images will only be downloaded as needed, if the user clicks on one of the thumbnails to get the full scale. This is an extremely useful tool to use in your themes to easily render images of the exact right size and thus limit your bandwidth consumption. Mobile users will thank you for that.
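
    If you are curious what the underlying pattern looks like outside of Orchard, here is a minimal sketch of the same resize-once-then-serve-static idea in Python with Pillow; the paths and naming scheme are invented, and Orchard's actual pipeline does considerably more:

        # Resize on first request, then serve the cached static file forever.
        import os
        from PIL import Image

        CACHE_DIR = "media/_resized"

        def resized_url(path: str, width: int) -> str:
            """Return the URL of a cached resized copy, creating it on first use."""
            os.makedirs(CACHE_DIR, exist_ok=True)
            stem, ext = os.path.splitext(os.path.basename(path))
            out = os.path.join(CACHE_DIR, f"{stem}_w{width}{ext}")
            if not os.path.exists(out):      # later calls hit the cache
                img = Image.open(path)
                height = round(img.height * width / img.width)  # keep aspect ratio
                img.resize((width, height)).save(out)
            return "/" + out.replace(os.sep, "/")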

    Read the article
