Search Results

Search found 37426 results on 1498 pages for 'simple talk editorial team'.

Page 444/1498 | < Previous Page | 440 441 442 443 444 445 446 447 448 449 450 451  | Next Page >

  • SharePoint 2013 Licensing Simplified

    - by Sahil Malik
    SharePoint 2010 Training: more information Before I begin, let me preface this by saying: I don't work for Microsoft, I don't sell SharePoint, and this is merely my understanding of the SharePoint 2013 licensing model. As always, before making any money decisions based on the below, talk to your Microsoft rep. The below is just my understanding; you are responsible for any decision you may take. With that aside, here is how I understand SharePoint 2013 licensing. Note that everything below is for on-prem SharePoint only. Also, it goes without saying that you need to purchase Windows Server and SQL Server licenses etc. on top of what you read below. The basics: you need to buy two things - the SharePoint server, and CALs. SharePoint server comes in Foundation, Standard and Enterprise editions. CALs can be either Enterprise or Standard, and they can be bought as CALs for SharePoint or as a CAL suite which includes Exchange and Lync. CALs can also be purchased as user CALs or device CALs. Read full article ....

    Read the article

  • Transformation?

    - by Joe G
    I started working at Oracle in 1997.  Since then, we (and most everyone) have been talking about transforming finance operations... but what does that mean exactly?  From my perspective, I thought it meant eliminating waste and menial tasks and giving your finance team more time to work on more strategic things.  That seems logical and simplistic, but how much progress have finance teams (and their IT departments) really made over the past fifteen years? I have yet to talk to a customer that doesn't have one amusing task that makes me chuckle.  Sometimes they still print hard copies of transactions to "file," or sometimes they print 700 pages of data to "analyze," or sometimes they cut and paste from one or more reports into a spreadsheet.  Upon hearing these things, my first question is always, "Why do you do that?" to which their response is rarely the same.  Sometimes it's related to trust (in both the employee and the system).  Sometimes it's habit-based.  And sometimes it is just impossible to accomplish the end result without some manual effort. I will say that I used to print nearly everything that I needed to review - partly because I liked having the ability to scribble notes on the paper, and partly because it was uncomfortable to read online.  However, I have changed. Rarely do I print anything anymore.  It's easier for me to read and notate online, and well, I guess I've just changed my habits. So where do you think our resistance to change comes from?  Is it truly deficits in our systems, or is it our own personal resistance to change?  What's your most annoying & untransformed task?

    Read the article

  • Tool to know what is making the desktop load longer than usual

    - by Marky
    Is there such a tool? My desktop has lately been taking longer to load than usual. I'd say it takes more than 20 seconds from GDM login until I see the desktop. Aside from disabling all app-indicators and testing them one by one, what else can be done? The only indicator I remember activating lately was bluetooth, and I have already disabled it from Startup Applications. No improvement. I know of bootchart, but I don't really have a problem with boot; it is only after I log in that the issue occurs. I'm on Natty Narwhal. Update: the issue seems to have fixed itself and I did not even do anything. It is really weird. I guess this is how GNOME works (and talk about not recognizing your theme and reverting to a Windows 95-like look - how about that?). I have been a long-time KDE user and I never encountered issues like this one. KDE back then may have taken longer to reach the desktop (from KDM), but at least I knew it was consistent.

    Read the article

  • How to build a good service layer in ASP.NET?

    - by Swippen
    I have looked through some questions and technologies for building a good service layer, but I have some questions that I need help with. First, some information about our requirements. We currently have a number of web applications that talk to each other in a spiderweb-looking way (all talking to each other, in a confusing way, via web services and database data). We want to change this so that all applications go through a service layer, where we can work more with caching and encapsulate common functionality, among other things. We also want this layer to have a Web API so that 3rd-party clients can consume information from the service. The problem I see is that if we build the service layer with, say, MVC4 Web API, don't we need to communicate between the applications using the Web API, meaning we have to construct URLs and consume JSON/XML? That does not sound very efficient. I assume a better method would be working with entities and WCF to communicate between the applications, but then we might lose the Web API magic? So the question is whether there is a way to consume a service layer both as a Web API (JSON/XML) and as a more backend service layer working with entities. If we are forced to build two different service layers, we might have to duplicate some functionality, among other bad things. I hope the question is clear enough; please ask if you need any more information.
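    Not from the question, just a hedged sketch of one common shape for this (all names are illustrative): keep the service layer as plain classes behind an interface, then let the ASP.NET Web API controller and the internal applications both act as thin adapters over that same code - HTTP with JSON/XML for 3rd parties, direct assembly references (or WCF endpoints) for your own apps.

        // Illustrative only: one shared service layer, two thin front ends.
        public interface ICustomerService
        {
            Customer GetCustomer(int id);
        }

        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class CustomerService : ICustomerService
        {
            // Caching, validation and other shared logic lives here, once.
            public Customer GetCustomer(int id)
            {
                return new Customer { Id = id, Name = "Example" };
            }
        }

        // HTTP adapter for 3rd-party clients (ASP.NET Web API).
        public class CustomersController : System.Web.Http.ApiController
        {
            private readonly ICustomerService _service = new CustomerService();

            // GET api/customers/42 (default routing) - Web API serializes the result to JSON or XML.
            public Customer Get(int id)
            {
                return _service.GetCustomer(id);
            }
        }

        // Internal applications reference the same assembly and skip HTTP entirely:
        //     ICustomerService service = new CustomerService();
        //     Customer c = service.GetCustomer(42);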

    Read the article

  • How to divide work to a network of computers?

    - by Morpork
    Imagine a scenario as follows: let's say you have a central computer which generates a lot of data. This data must go through some processing, which unfortunately takes longer than the generation. In order for the processing to catch up with real time, we plug in more slave computers. Further, we must take into account the possibility of slaves dropping out of the network mid-job, as well as additional slaves being added. The central computer should ensure that all jobs are finished to its satisfaction, and that jobs dropped by a slave are retasked to another. The main question is: what approach should I use to achieve this? But perhaps the following would help me arrive at an answer: Is there a name or design pattern for what I am trying to do? What domain of knowledge do I need to get these computers to talk to each other? (E.g., will a database, which I have some knowledge of, be enough, or will this involve sockets, which I have no knowledge of yet?) Are there any examples of such a system? The main question is a bit general, so it would be good to have a starting point/reference point. Note that I am assuming the constraints of C++ and Windows, so solutions pointing in that direction would be appreciated.
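    Not from the question: what is described here is usually called the master/worker (or work queue) pattern. Below is a minimal single-process sketch of the retasking logic, with threads standing in for slave machines; in a real deployment the hand-off would travel over sockets or a message queue, and the question's constraints would make this C++ rather than the C# used here for brevity.

        // Master keeps a queue of pending jobs, tracks in-flight jobs, and
        // re-queues any job whose worker has gone silent for too long.
        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        class MasterWorkerSketch
        {
            static BlockingCollection<int> pending = new BlockingCollection<int>();
            static ConcurrentDictionary<int, DateTime> inFlight = new ConcurrentDictionary<int, DateTime>();
            static int finished = 0;
            const int JobCount = 100;

            static void Worker()
            {
                foreach (int job in pending.GetConsumingEnumerable())
                {
                    inFlight[job] = DateTime.UtcNow;   // "slave" reports it took the job
                    Thread.Sleep(50);                  // simulate the slow processing step
                    DateTime taken;
                    if (inFlight.TryRemove(job, out taken))
                        Interlocked.Increment(ref finished);   // report the result back
                }
            }

            static void Main()
            {
                for (int i = 0; i < JobCount; i++) pending.Add(i);   // generated data
                for (int w = 0; w < 4; w++) new Thread(Worker).Start();

                while (Volatile.Read(ref finished) < JobCount)
                {
                    // Retask jobs whose slave appears to have dropped out.
                    foreach (var entry in inFlight)
                    {
                        if ((DateTime.UtcNow - entry.Value).TotalSeconds > 30)
                        {
                            DateTime ignored;
                            if (inFlight.TryRemove(entry.Key, out ignored))
                                pending.Add(entry.Key);
                        }
                    }
                    Thread.Sleep(200);
                }
                pending.CompleteAdding();   // lets the workers drain out and exit
                Console.WriteLine("All jobs finished.");
            }
        }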

    Read the article

  • How can I compare audio, and what programming language should I use?

    - by Pimmetje
    I have 2 audio files that are from almost the same source, but at some points they are shifted a bit. Also, the codecs do not match. I would like to make a program that takes a sample of 2-4 seconds and looks for it in the other file (most of the time it's not shifted by more than 30 seconds), then takes the offset and stores it, skips ahead a few seconds, takes another sample and finds it again. This way I want to create a file where I can see at which points the audio is shifted. For people who are more interested in what I want: I have an audio/video file with speech and subtitles, but I have the same speech from a different source which differs a bit in time, and I would like to make a program that can correct the subtitle timing for me. Enough about the problem. I looked on the Internet for ways to compare audio files. Based on what I read, comparing 2 audio files isn't as easy as I had hoped. Some talk about algorithms (http://www.perlmonks.org/?node_id=169641). Some audio libraries: portaudio.com, aubio.org, sourceforge.net/projects/ccaudio/, ambiera.com/irrklang/. The biggest problem I have is that I can't find something I can generate from the audio that I can use to compare with. I hope someone here can point me in the right direction.
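    Not part of the original question: the usual starting point for finding such an offset is cross-correlation. A hedged, brute-force sketch, assuming both files have already been decoded to mono PCM float arrays at the same sample rate (the decoding step, e.g. via one of the libraries above, is not shown):

        // Slide a short reference snippet over a longer search window and
        // return the sample offset where the (unnormalized) correlation peaks.
        static int FindBestOffset(float[] snippet, float[] searchWindow)
        {
            int bestOffset = 0;
            double bestScore = double.MinValue;

            for (int offset = 0; offset + snippet.Length <= searchWindow.Length; offset++)
            {
                double score = 0;
                for (int i = 0; i < snippet.Length; i++)
                    score += snippet[i] * searchWindow[offset + i];   // dot product at this shift

                if (score > bestScore)
                {
                    bestScore = score;
                    bestOffset = offset;
                }
            }
            return bestOffset;   // bestOffset / sampleRate = shift in seconds
        }

    For a 2-4 second snippet against a 30-second window this O(n*m) loop is slow but workable; FFT-based correlation or an audio fingerprinting scheme (as discussed in the PerlMonks thread above) is the usual next step, and normalizing the correlation makes the peak more robust to volume differences.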

    Read the article

  • Customized Computer Science Degree - What other field would mesh well with computer science?

    - by sailtheworld
    So here's my situation: I have seven years of experience with web development. I can do PHP, MySQL, OOP, all of that stuff. I would argue that I have enough technical experience to go out into the real world and get a well-paying, full-time job if I were to drop out right now (I've had a number of job offers recently, and I have already gotten a lot of actual job experience). But I would like to stay in school and get a degree for a number of reasons, ranging from the social aspects to the fact that I just want to have a BS in one thing or another, as it seems to be important for a lot of jobs, even when the degree doesn't have anything to do with the job. With that said, it makes little sense for me to major in Computer Science, because that would be like studying everything I already know. I don't want to major in something COMPLETELY different, because that would be contrary to my career goals. I am considering trying to find some interdisciplinary, customized degree of sorts that allows me to combine my current skills with a new education. I'm thinking maybe business or even psychology (interface design?). Could I get some ideas for what to major in and tips on who I might talk to? Thanks!

    Read the article

  • Is it too late to start your career as a programmer at the age of 30?

    - by Matt
    Assuming one graduated college at 30 years old and has 5 years of experience with various tools and programming languages (no real job experience, just contributing to open source and doing personal projects), how would he or she be looked upon by hiring managers? Will it be harder to find a job, considering that the average person gets hired in this field at around 20 years old (I got this figure from looking at various websites, user profiles on SO and here, etc.)? I know that it's never too late to do what you're passionate about and the like, but sometimes it is too late to start a career. Is this the case? Managers are always looking for fresh people, and I often read job descriptions specifically asking for young people. I don't need answers of encouragement; I know the community here is great and I wouldn't get offended by even the coldest answers. Please don't close this as being too localized - I'm not referring to any specific country or region, so talk about the region you're in. I would also appreciate it if you justified your answer.

    Read the article

  • What's the best way to use requestAnimationFrame and fixed frame rates

    - by m90
    I recently got into using the HTML5 requestAnimationFrame API a lot on animation-heavy websites, especially after seeing the Jank Busters talk. This seems to work pretty well and really improve performance in many cases. Yet one question still persists for me: when wanting to use an animation that is NOT entirely calculated (think spritesheets, for example) you will have to aim for a fixed frame rate. Of course one could go back to using setInterval again, but maybe there are other ways to tackle this. The two ways I could think of using requestAnimationFrame with a fixed frame rate are:

        var fps = 25; //frames per second
        function animate(){
            //actual drawing goes here
            setTimeout(function(){
                requestAnimationFrame(animate);
            }, 1000 / fps);
        }
        animate();

    or

        var fps = 25; //frames per second
        var lastExecution = new Date().getTime();
        function animate(){
            var now = new Date().getTime();
            if ((now - lastExecution) > (1000 / fps)){
                //do actual drawing
                lastExecution = new Date().getTime();
            }
            requestAnimationFrame(animate);
        }
        animate();

    Personally, I'd opt for the second option (the first one feels like cheating), yet it seems to be more buggy in certain situations. Is this approach really worth it (especially at low frame rates like 12.5)? Are there things to be improved? Is there another way to tackle this?

    Read the article

  • 2012 Oracle Fusion Innovation Awards - Part 1

    - by Michelle Kimihira
    Author: Moazzam Chaudry. This year we recognized 29 customers for their innovative use of Oracle Fusion Middleware and their significant results. The winners were selected across 8 product categories from 11 countries, spanning diverse industries around the world. This is a two-part blog series. The 2012 Fusion Middleware Innovation Awards winners were announced at OOW on October 2nd by Hasan Rizvi (EVP Fusion Middleware and Java Development), Amit Zavery (VP Product Management) and Ed Zou (VP Product Management) to an audience that included press, analysts and customers. Winners were selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. The program is in its 6th year, and this year we are excited to have received over 250 submissions from customers around the globe. The winners were selected by a panel of internal and external judges; it was difficult to select this year's most innovative projects. Judges scored each entry across multiple scoring categories. This year, winning use cases for Fusion Middleware include: improving customer experience by monitoring in real time and simplifying the user experience of tens of millions of customers; driving social engagement through social media channels in fields including healthcare; harnessing big data by analyzing and improving visibility across 60M+ customers and hundreds of terabytes of data; enabling mobile adoption by delivering a mobile news experience to 50% of the Australian population; embracing cloud computing by delivering hospitality services to 3000+ hotels and monitoring services to hospitals; and optimizing critical processes such as remarketing cars through tens of thousands of dealers. In Monday's blog, we will talk about the winners in each category and what customers had to say in the customer panel. Congratulations to the 2012 Oracle Fusion Innovation Award winners:

    Read the article

  • Recommended display/background brightness ratio and UI color schemes [duplicate]

    - by user1306322
    This question already has an answer here: Colour scheme for editor - guidelines or medical reccomendations (3 answers). I'm a professional programmer, which means I spend a lot of time staring at various displays. Recently I've been having some problems with my eyes, so I went to talk to several doctors, who all gave me different recommendations on how bright the background of the room should be in comparison to the display's brightness. It was very confusing, as some of them even agreed with the counter-arguments of others, which made it all even less clear. So I'd like to ask professional programmers, as people who actually have some experience with this. Some of the doctors said that looking at a monitor is like looking at a book, so the brightness ratios should be approximately the same. Others said that the background should be as bright as the display itself, because then there is no brightness difference at the edges, and that's what may cause eye fatigue. From my own experience, I can say that reading a book isn't the same as writing or debugging a program, where you have to pay close attention to each symbol, whereas in books most words are easily recognizable without focusing too hard on them. Also, books are black on white, and I myself use the default (dark text, white background) color scheme for my IDE, but I've seen some programmers use mid-bright text on very dark background color schemes. So I'd like to ask: what are the recommended display/background brightness ratios for programming? I'm not sure this site is the right one for this kind of question, so if you know a better one, please comment.

    Read the article

  • Upon Reflection

    - by foxjazz
    During my tenure at my last company, I didn't let my career stagnate as others did as time moved along. When at work or at home, spend 10% of your time learning something new about some aspect or segue of your job, so that your skills stay marketable in case you lose it. From experience, let me reinforce that it pays off. It pays off in your current job because of the education received and the increased competence of your skills, which, applied, will bring recognition. In these days and times, loyalty to a company is truly at an end. However, many companies do care about cultivating their employees, which creates a brand of loyalty that can't be replaced. Old companies with the corp. mentality (or because of the corp. mentality) ever decrease the budgets of organizational sections and thereby do a RIF as a matter of business. The mistakes they make during this process can be risky. But who am I, a lowly ole programmer, to judge risk? If you are laid off, stay friendly with your past manager and give whatever help you can over the phone for simple questions, even though you are under no obligation to do so. It is also quite possible that there are opportunities to make a home with a new company in the future. Just remember that when inquiring about a position, take advantage of the training that is offered, and keep yourself emotionally and educationally fit. Talk soon, foxjazz

    Read the article

  • JCP 2.9 and Transparency Call for Spec Leads 9 November

    - by heathervc
    JCP Spec Leads are invited to participate in an online meeting/call this Friday, 9 November, to hear a talk about the 2.9 version of the Java Community Process (effective date of 13 November) and discuss the changes with representatives of the Program Management Office. This call will be recorded and published with materials for those not able to attend. Details of the call are included below. JCP 2.9 is presented in two documents: the JCP 2.9 document (http://www.jcp.org/en/procedures/jcp2) and the EC Standing Rules document (http://www.jcp.org/en/procedures/ec_standing_rules). In addition, we will be reviewing ways to collect community feedback on the transparency requirements for JCP 2.7 and above JSRs (JCP 2.8, JCP 2.9), detailed as part of the Spec Lead Guide. Call details: Topic: JCP 2.9 and Transparency. Date: Friday, November 9, 2012. Time: 9:00 am, Pacific Standard Time (San Francisco, GMT-08:00). Meeting Number: 800 623 574. Meeting Password: 5282. To start or join the online meeting, go to https://jcp.webex.com/jcp/j.php?ED=188925347&UID=491098062&PW=NMDZiYTQzZmE1&RT=MiM0. Audio conference information: Toll-Free Dial-In Number: 866 682-4770. International (Toll) Dial-In Number: 408 774-4073. Conference code: 9454597. Security code: 1020. Outside the US: global access numbers at https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=6279803

    Read the article

  • Pros and Cons of Facebook's React vs. Web Components (Polymer)

    - by CletusW
    What are the main benefits of Facebook's React over the upcoming Web Components spec, and vice versa (or perhaps a more apples-to-apples comparison would be to Google's Polymer library)? According to this JSConf EU talk and the React homepage, the main benefits of React are: decoupling and increased cohesion using a component model; abstraction, composition and expressivity; virtual DOM and synthetic events (which basically means they completely re-implemented the DOM and its event system); enabling modern HTML5 event stuff on IE 8; server-side rendering; testability; and bindings to SVG, VML, and <canvas>. Almost everything mentioned is being integrated into browsers natively through Web Components except this virtual DOM concept (obviously). I can see how the virtual DOM and synthetic events can be beneficial today to support old browsers, but isn't throwing away a huge chunk of native browser code kind of like shooting yourself in the foot in the long term? As far as modern browsers are concerned, isn't that a lot of unnecessary overhead/reinventing of the wheel? Here are some things I think React is missing that Web Components will take care of (correct me if I'm wrong): native browser support (read "guaranteed to be faster"); writing script in a scripting language, styles in a styling language, and markup in a markup language; style encapsulation using Shadow DOM (React instead has this, which requires writing CSS in JavaScript - not pretty); and two-way binding.

    Read the article

  • What service or software should I use to serve advertising on a site with about 120k monthly page views?

    - by JasonBirch
    I have a site that is generating about 120k monthly page views and is being hosted on a shared FreeBSD server where I have access to PHP and MySQL. I am using some custom PHP server-side scripts that give each of my ad networks (AdSense, Tribal Fusion, etc.) an adjustable percentage of impressions in each of the ad positions on my pages. I am looking for a better way of managing and measuring the delivery of these ads, and would also like to be able to take direct placements and provide statistics to the clients. I am looking at options including OpenX self-hosted, OpenX Community, and Google DoubleClick for Publishers Small Business (DFP), but am having difficulty determining which one will best meet my needs. They all seem to have pretty steep learning curves compared to my simple scripts. What I have taken away so far as the benefit of self-hosting is that I don't have to pay for the service if I exceed a maximum number of ad impressions, while both OpenX Community and DFP have free impression limits. Of course, if I were doing those kinds of numbers I'd need to upgrade my hosting account, but I'm not sure even at that point whether it would be cheaper to serve the ads myself than to pay for a premium service. Apart from this, I really need insights into what features differentiate these services, why I might want to choose one over another, and whether there are any other competing products or services of the same quality that I should look into. Answers from webmasters who have used both (or all three) services and can speak to usability and ease of ad management would be highly appreciated.

    Read the article

  • Mobile (Client) to Amazon S3 (Server) - Architecture

    - by wasabii
    Let's start off with the problem statement: my iOS application has a login form. When the user logs in, a call is made to my API and access is granted or denied. If access was granted, I want the user to be able to upload pictures to his account and/or manage them. As storage I've picked Amazon S3, and I figured it'd be a good idea to have one bucket called "myappphotos", for instance, which contains lots of folders. The folder names are hashes of a user's email and a secret key, so every user has his own, unique folder in my Amazon S3 bucket. Since I've just recently started working with AWS, here's my question: what are the best practices for setting up a system like this? I want the user to be able to upload pictures directly to Amazon S3, but of course I cannot hard-code the access key. So I need my API to somehow talk to Amazon and request an access token of sorts - only for the particular folder that belongs to the user I'm making the request for. Can anyone help me out and/or guide me to some sources where a similar problem was addressed? I don't think I'm the first one, and the Amazon documentation is so extensive that I don't really know where to start looking. Thanks a lot!
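    Not from the question: the usual answer to this is to have the API hand the app either temporary credentials (via AWS STS) or, more simply, a short-lived pre-signed URL scoped to one object key under the user's folder, so the device can PUT the photo directly to S3 without ever seeing the account's access key. A hedged server-side sketch using the AWS SDK for .NET (the bucket name, expiry and the class/method names around it are illustrative):

        // Issue a short-lived pre-signed PUT URL for one object in the
        // caller's private folder. The mobile app uploads straight to S3
        // with an HTTP PUT to this URL; the secret key stays on the server.
        using System;
        using Amazon.S3;
        using Amazon.S3.Model;

        public class UploadTicketService
        {
            private readonly AmazonS3Client s3 = new AmazonS3Client(); // credentials come from server-side config

            public string CreateUploadUrl(string userFolderHash, string fileName)
            {
                var request = new GetPreSignedUrlRequest
                {
                    BucketName = "myappphotos",
                    Key = userFolderHash + "/" + fileName,
                    Verb = HttpVerb.PUT,
                    Expires = DateTime.UtcNow.AddMinutes(15)
                };
                return s3.GetPreSignedURL(request);
            }
        }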

    Read the article

  • How do you coordinate with co-workers to give a balanced interview?

    - by goldierox
    My company has been conducting a lot of interviews lately, for candidates with experience levels ranging from interns to senior hires. We put our candidates through five 45-minute interview sessions where we try to ask a range of questions. One person always asks the same questions that test logic and communication. The rest typically split their time between a whiteboard coding question and a discussion of previous projects, technologies the interviewee has worked with, and what he/she is looking for in a job. Generally, we know the range of questions that other people on the loop will ask. Sometimes we switch things up and end up with redundancies: today, three interviewers asked tree-related questions; other times, we've all honed in on the same project on a resume and had the interviewee talk about it with everyone. I think a smooth interview process would help us learn more about the candidate while giving the candidate the impression that we have our act together as a team. How do you coordinate with others in the interview loop to give a balanced interview?

    Read the article

  • GlassFish/Java EE Community Open Forum Tomorrow!

    - by reza_rahman
    Still have lingering questions on the goals and future of GlassFish? Want to know a little more about the upcoming GlassFish 4.0.1 release? Something on your mind about Java EE 8/GlassFish 5? You have a golden opportunity to pose your questions and speak your mind tomorrow! The good folks over at C2B2 have put in a lot of time and effort to organize a very useful online event for the London GlassFish User Group - they are having me answer all your questions online, in real time, "face-to-face". Steve Millidge of C2B2 will be moderating the questions and joining the conversation. Did I mention the event is online, free and open to anyone? The event is tomorrow (May 30th), so make sure to register as soon as possible through the C2B2 website (the registration page has more details on the event). It will be held at 4:30 PM BST / 11:30 AM EST / 8:30 AM PST - you must register to participate. Hope to talk to you tomorrow!

    Read the article

  • Series On Embedded Development (Part 1)

    - by user12612705
    This is the first in a series of entries on developing applications for the embedded environment. Most of this information is relevant to any type of embedded development (and even for desktop and server too), not just Java. This information is based on a talk Hinkmond Wong and I gave at JavaOne 2012 entitled Reducing Dynamic Memory in Java Embedded Applications. One thing to remember when developing embedded applications is that memory matters. Yes, memory matters in desktop and server environments as well, but there's just plain less of it in embedded devices. So I'm going to be talking about saving this precious resource as well as another precious resource, CPU cycles... and a bit about power too. CPU matters too, and again, in embedded devices, there's just plain less of it. What you'll find, no surprise, is that there's a trade-off between performance and memory. To get better performance, you need to use more memory, and to save more memory, you need to use more CPU cycles. I'll be discussing three memory reduction categories:
    - Optionality, both build-time and runtime. Optionality is about providing options so you can get rid of the stuff you don't need and include the stuff you do need.
    - Tunability, which is about providing options so you can tune your application by trading performance for size, and vice versa.
    - Efficiency, which is about balancing size savings with performance.

    Read the article

  • Can't get my graphics driver (GMA 3150) to work

    - by bracus
    I've been searching like crazy trying to find a fix for this; it's the only thing that's not completely working on my setup. I see posts where people say it should be working, but it just isn't. I have a Gateway LT2802u netbook and I installed 11.10 on it 2 days ago. Everything works except for accelerated graphics. At first I couldn't watch a simple flash video at all, but somehow I got that to work. Now the last problems I have are that I can't watch HD videos, my screen resolution won't go higher than 1024x600, and under my graphics driver it says "Unknown". After doing as much research as possible, I've come to the conclusion that it's the GMA 3150 graphics driver. There is a bunch of talk about it all over the interwebs, but nothing lately. I've tried the fixes that some people have used, but for most of them, when I try to get the package it's no longer there or available, if that makes sense. I'm loving everything Ubuntu has to offer, but it'll really bite if I can't use it any more because of this problem. Does anybody have any ideas? You'd really be helping a lot.

    Read the article

  • Oracle performance problem

    - by jreid42
    We are using an Oracle 11g machine that is very powerful and has redundant storage, etc. It's a beast, from what I have been told. We got this DB for a tool that had about 20 users when I first came on as a co-op; now it's upwards of 150 people, and I am the only one working on it :( We currently have a system in place that distributes Perl scripts across our entire data center, essentially giving us a sort of "grid" computing power. The Perl scripts run a sort of simulation and report the results back to the database. They do selects/inserts. The load is not very high for each script, but it could be happening across 20-50 systems at the same time. We then have multiple data centers and users all hitting the same database with this same approach. Our main problem is that our database is getting overloaded with connections and having to drop some. We sometimes have upwards of 500 connections. These are old Perl scripts and they do not handle this well; essentially they fail and the results are lost. I would rather avoid having to rewrite a lot of them, as they are poorly written and a headache to even look at. The database itself is not overloaded; just the connection overhead is too high. We open a connection, make a quick query and then drop the connection - very short connections, but many of them. The database team has basically said we need to lower the number of connections or they are going to ignore us. Because this is distributed across our farm, we can't implement persistent connections. I do this with our web server, but it's on a fixed system. The other ones are Perl scripts that get opened and closed by the distribution tool and thus aren't always running. What would be my best approach to resolving this issue? The scripts themselves can wait for a connection to be open; they do not need to act immediately. Some sort of queueing system? I've been advised to set up a few instances of a tool called "SQL Relay", maybe one in each data center. How reliable is this tool? How good is this approach? Would it work for what we need? We could have one for each data center and relay requests through it to our main database, keeping a pipeline of open persistent connections? Does this make sense? Are there any other suggestions you can make? Any ideas? Any help would be greatly appreciated. Sadly, I am just a co-op student working for a very big company and somehow all of this has landed on my shoulders (there is literally nobody to ask for help; it's a hardware company, everybody is a hardware engineer, and the database team is useless and in India), and I am quite lost as to what the best approach would be. I am extremely overworked and this problem is interfering with ongoing progress and basically needs to be resolved as quickly as possible, preferably without rewriting the whole system, purchasing hardware (not gonna happen), or shooting myself in the foot. HELP LOL!
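    Not from the question: the "queueing system" idea usually takes the shape of a small connection broker sitting between the short-lived scripts and the database - a handful of long-lived connections serving a queue of short requests, which is essentially what SQL Relay does as a standalone proxy. A hedged in-process sketch of that shape only (the connection factory and pool size are placeholders, not a specific Oracle driver, and a real deployment would expose this over the network rather than in one process):

        // Many short-lived callers enqueue their work; a fixed pool of
        // long-lived connections drains the queue, so the database sees
        // poolSize connections instead of hundreds.
        using System;
        using System.Collections.Concurrent;
        using System.Data;
        using System.Threading;

        public class ConnectionBroker
        {
            private readonly BlockingCollection<Action<IDbConnection>> work =
                new BlockingCollection<Action<IDbConnection>>();

            public ConnectionBroker(Func<IDbConnection> createConnection, int poolSize)
            {
                for (int i = 0; i < poolSize; i++)
                {
                    var worker = new Thread(() =>
                    {
                        using (IDbConnection conn = createConnection())
                        {
                            conn.Open();   // one persistent connection per pool slot
                            foreach (var job in work.GetConsumingEnumerable())
                                job(conn); // run queued queries one after another
                        }
                    });
                    worker.IsBackground = true;
                    worker.Start();
                }
            }

            // Callers hand over their query instead of opening their own connection.
            public void Enqueue(Action<IDbConnection> job)
            {
                work.Add(job);
            }
        }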

    Read the article

  • Postfix on Snow Leopard unable to send MIME emails, including header contents in message body

    - by devvy
    I configured postfix on snow leopard by adding the following line to /etc/hostconfig: MAILSERVER=-YES- I then configured postfix to relay through my ISP's SMTP server. I added the following two lines in their respective places within /etc/postfix/main.cf: myhostname = 1and1.com relayhost = shawmail.vc.shawcable.net I then have a simple PHP mail function wrapper as follows: send_email("[email protected]", "[email protected]", "Test Email", "<p>This is a simple HTML email</p>"); echo "Done"; function send_email($from,$to,$subject,$message){ $header="From: <".$from."> "; $header.= 'MIME-Version: 1.0' . " "; $header.= 'Content-type: text/html; charset=iso-8859-1' . " "; $send_mail=mail($to,$subject,$message,$header); if(!$send_mail){ echo "ERROR"; } } With this, I am receiving an e-mail that appears to be improperly formatted. The message header is showing up in the body of the e-mail. The raw message content is as follows: Return-Path: <[email protected]> Delivery-Date: Tue, 27 Apr 2010 18:12:48 -0400 Received: from idcmail-mo2no.shaw.ca (idcmail-mo2no.shaw.ca [64.59.134.9]) by mx.perfora.net (node=mxus2) with ESMTP (Nemesis) id 0M4XlU-1NCtC81GVY-00z5UN for [email protected]; Tue, 27 Apr 2010 18:12:48 -0400 Message-Id: <[email protected]> Received: from pd6ml3no-ssvc.prod.shaw.ca ([10.0.153.149]) by pd6mo1no-svcs.prod.shaw.ca with ESMTP; 27 Apr 2010 16:12:47 -0600 X-Cloudmark-SP-Filtered: true X-Cloudmark-SP-Result: v=1.0 c=1 a=VphdPIyG4kEA:10 a=hATtCjKilyj9ZF5m5A62ag==:17 a=mC_jT1gcAAAA:8 a=QLyc3QejAAAA:8 a=DGW4GvdtALggLTu6w9AA:9 a=KbDtEDGyCi7QHcNhDYYwsF92SU8A:4 a=uch7kV7NfGgA:10 a=5ZEL1eDBWGAA:10 Received: from unknown (HELO 1and1.com) ([24.84.196.104]) by pd6ml3no-dmz.prod.shaw.ca with ESMTP; 27 Apr 2010 16:12:48 -0600 Received: by 1and1.com (Postfix, from userid 70) id BB08D14ECFC; Tue, 27 Apr 2010 15:12:47 -0700 (PDT) To: [email protected] Subject: Test Email X-PHP-Originating-Script: 501:test.php Date: Tue, 27 Apr 2010 18:12:48 -0400 X-UI-Junk: AutoMaybeJunk +30 (SPA); V01:LYI2BGRt:7TwGx5jxe8cylj5nOTae9JQXYqoWvG2w4ZSfwYCXmHCH/5vVNCE fRD7wNNM86txwLDTO522ZNxyNHhvJUK9d2buMQuAUCMoea2jJHaDdtRgkGxNSkO2 v6svm0LsZikLMqRErHtBCYEWIgxp2bl0W3oA3nIbtfp3li0kta27g/ZjoXcgz5Sw B8lEqWBqKWMSta1mCM+XD/RbWVsjr+LqTKg== Envelope-To: [email protected] From: <[email protected]> MIME-Version: 1.0 Content-type: text/html; charset=iso-8859-1 Message-Id: <[email protected]> Date: Tue, 27 Apr 2010 15:12:47 -0700 (PDT) <p>This is a simple HTML email</p> And here are the contents of my /var/log/mail.log file after sending the email: Apr 27 15:29:01 User-iMac postfix/qmgr[705]: 74B1514EDDF: removed Apr 27 15:29:30 User-iMac postfix/pickup[704]: 25FBC14EDF0: uid=70 from=<_www> Apr 27 15:29:30 User-iMac postfix/master[758]: fatal: open lock file pid/master.pid: unable to set exclusive lock: Resource temporarily unavailable Apr 27 15:29:30 User-iMac postfix/cleanup[745]: 25FBC14EDF0: message-id=<[email protected]> Apr 27 15:29:30 User-iMac postfix/qmgr[705]: 25FBC14EDF0: from=<[email protected]>, size=423, nrcpt=1 (queue active) Apr 27 15:29:30 User-iMac postfix/smtp[747]: 25FBC14EDF0: to=<[email protected]>, relay=shawmail.vc.shawcable.net[64.59.128.135]:25, delay=0.21, delays=0.01/0/0.1/0.1, dsn=2.0.0, status=sent (250 ok: Message 25784419 accepted) Apr 27 15:29:30 User-iMac postfix/qmgr[705]: 25FBC14EDF0: removed Two other people in the office have followed the exact same process and are running the exact same script, version of snow leopard, php, etc. and everything is working fine for them. 
I've even copied their config files to my machine, restarted postfix, restarted apache, all to no avail. Does anyone know what steps I could take to resolve the issue? This is boggling my mind... Thanks

    Read the article

  • Product Review: qlWebDS Pro

    There are many products available for creating directory-style web sites, but webmasters prefer simple ones that contain features relevant to them. In this review, Anand puts the Pro version of the qlWebDS software to the test. He examines the various features and provides suggestions for improving the quality of the product.

    Read the article

  • Finding a Relative Path in .NET

    - by Rick Strahl
    Here’s a nice and simple path utility that I’ve needed in a number of applications: I need to find a relative path based on a base path. So if I’m working in a folder called c:\temp\templates\ and I want to find a relative path for c:\temp\templates\subdir\test.txt I want to receive back subdir\test.txt. Or if I pass c:\ I want to get back ..\..\ – in other words always return a non-hardcoded path based on some other known directory. I’ve had a routine in my library that does this via some lengthy string parsing routines, but ran into some Uri processing today that made me realize that this code could be greatly simplified by using the System.Uri class instead. Here’s the simple static method: /// <summary> /// Returns a relative path string from a full path based on a base path /// provided. /// </summary> /// <param name="fullPath">The path to convert. Can be either a file or a directory</param> /// <param name="basePath">The base path on which relative processing is based. Should be a directory.</param> /// <returns> /// String of the relative path. /// /// Examples of returned values: /// test.txt, ..\test.txt, ..\..\..\test.txt, ., .., subdir\test.txt /// </returns> public static string GetRelativePath(string fullPath, string basePath ) { // ForceBasePath to a path if (!basePath.EndsWith("\\")) basePath += "\\"; Uri baseUri = new Uri(basePath); Uri fullUri = new Uri(fullPath); Uri relativeUri = baseUri.MakeRelativeUri(fullUri); // Uri's use forward slashes so convert back to backward slashes return relativeUri.ToString().Replace("/", "\\"); } You can then call it like this: string relPath = FileUtils.GetRelativePath("c:\temp\templates","c:\temp\templates\subdir\test.txt") It’s not exactly rocket science but it’s useful in many scenarios where you’re working with files based on an application base directory. Right now I’m working on a templating solution (using the Razor Engine) where templates live in a base directory and are supplied as relative paths to that base directory. Resolving these relative paths both ways is important in order to properly check for existance of files and their change status in this case. Not the kind of thing you use every day, but useful to remember.© Rick Strahl, West Wind Technologies, 2005-2010Posted in .NET  CSharp  

    Read the article
