Search Results

Search found 2481 results on 100 pages for 'medium trust'.


  • Evaluating Scrum - is it okay to have people with multiple roles in a Scrum team?

    - by Wayne M
    I'm evaluating some Agile-style methodologies for possible introduction to my team. With Scrum, is it allowable to have the same person perform multiple roles? We have a small team of four developers and a web designer; we don't really have a lead (I fulfill this role), QA testers or business analysts, and all of our development tasks come from the CIO. Automated testing is seen as a total waste of time, and everything focuses on speed and not quality.

    What will happen is the CIO will come up with a development task (whether a feature or a bug) and give it to a developer (not to the whole team, to an individual, often in private or out of the blue) who is then expected to get it completed. The CIO doesn't gather requirements beyond the initial idea (and this has bitten us before, as we'll implement something only to find out that none of the end users can use the feature, because they weren't consulted or even informed about it before we developed it, and in a panic we'll be told to revert the change) but requires say in/approval of everything that we do.

    First things first, is a Scrum style something to consider to introduce some standards and practices? From reading, Scrum seems to rely on a bit more trust and communication and focuses more on project management than on development, which is something we are completely devoid of, as we don't have any semblance of project management at present.

    Second, if it can work, is it unreasonable for someone, let's say myself, to act as both ScrumMaster and a developer? Or for a developer to also be the Product Owner (although chances are this will be the CIO, who isn't a developer)? I realize the ScrumMaster and the Product Owner should be different people, but at the same time I don't think we have anyone who has the qualities of a Product Owner (chances are it would turn into a "I need all these stories, I don't care how but get it done" type of deal and/or any freeze would be unfrozen on a whim).

    It seems to me that I might need to pick and choose pieces of Scrum/XP/Lean to compensate for how things are done currently, as it's highly unlikely that the mentality can be changed; for instance, Pair Programming would never fly (seen as a waste: you get half the tasks done if you need two people for everything), TDD would be a hard sell, but short cycles would be welcomed.

    Read the article

  • Create Outlook Appointments from PowerShell

    - by BuckWoody
    I've been toying around with a script to create a special set of calendar objects in Outlook that show when my SQL Server Agent Jobs are scheduled to run. I haven't finished yet, but I thought I would share the part that creates the Outlook appointments. I have yet to fill a variable with the start and end times, and then loop through that to create the appointments. I'm thinking I'll make the script below into a function, and feed it those variables in a loop. The script below creates a whole new calendar folder in Outlook called "SQL Server Agent Jobs". I also use categories quite a bit, so you'll see that too. Caution: if you plan to play with this script, do it on an isolated workstation, not on your "regular" Outlook calendar. Otherwise, you'll have lots of appointments in there that you don't care about!

        # Add a new calendar item to a new Outlook folder called "SQL Server Agent Jobs"
        $outlook = new-object -com Outlook.Application
        $calendar = $outlook.Session.folders.Item(1).Folders.Item("SQL Server Agent Jobs")
        $appt = $calendar.Items.Add(1) # == olAppointmentItem
        $appt.Start = [datetime]"03/11/2010 11:00"
        $appt.End = [datetime]"03/11/2010 12:00"
        $appt.Subject = "JobName"
        $appt.Location = "ServerName"
        $appt.Body = "Job Details"
        $appt.Categories = "SQL Server Agent Job"
        $appt.Save()

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.
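    To give an idea of where the author says this is headed, here is a minimal sketch of the function-plus-loop approach he describes. The $jobs collection and its property names (Name, Server, Start, End) are hypothetical placeholders; only the appointment-creation calls come from the script above.

        # Sketch: wrap the appointment creation in a function and loop over a
        # (hypothetical) $jobs collection filled from the Agent schedule.
        function Add-AgentJobAppointment {
            param($Calendar, [string]$JobName, [string]$ServerName,
                  [datetime]$Start, [datetime]$End)
            $appt = $Calendar.Items.Add(1)   # 1 == olAppointmentItem
            $appt.Start = $Start
            $appt.End = $End
            $appt.Subject = $JobName
            $appt.Location = $ServerName
            $appt.Categories = "SQL Server Agent Job"
            $appt.Save()
        }

        $outlook = New-Object -ComObject Outlook.Application
        $calendar = $outlook.Session.Folders.Item(1).Folders.Item("SQL Server Agent Jobs")
        foreach ($job in $jobs) {
            Add-AgentJobAppointment -Calendar $calendar -JobName $job.Name `
                -ServerName $job.Server -Start $job.Start -End $job.End
        }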

    Read the article

  • The value of money

    - by ambreesh
    A dictionary definition of money is "any circulating medium of exchange, including coins, paper money, and demand deposits". If you ask an economist for a definition of money, you will be introduced to terms like M1, M2, M3, all of which denote tangible assets - currency, and anything that is liquid enough to be used as currency; checks, stamps and now mobile minutes being examples. The macroeconomic theory of money is fascinating - the effect of money supply on exchange rates and interest rates, the concept of the "money multiplier" (if I deposit $10 into a bank, the bank will likely loan $8 of it to someone else, who will then give it to someone else in exchange for goods and services, who will then likely deposit it again, which will result in the bank loaning it again, and so on - making that $10 of money supply worth a lot more: $10+$8+$x+...). But all this depends on money supply - in other words, money that is printed by the mint. The Treasury Department spends a lot of time figuring out how much money to print; there is a lot being written about QE2 nowadays, which is intended to increase the money supply. Money is used to purchase goods and services, and yes, it is saved too, but that is so one can purchase goods and services later.

    Completely unrelated, there is a sea change occurring in the web world, dominated by, I believe, Facebook. With 500M active users and growing, FB has the ability to introduce a "money supply" which is completely unrelated to today's "money". Using today's money, a FB user can buy a certain number of FB$s, and then use the FB$s within FB to purchase goods and services - with the money multiplier kicking in. I remember talking with a colleague about this a few years ago: the true way to monetize the web is to introduce an alternative system to the existing one, and FB has the ability to do just that. There is enough momentum, enough mass, for FB to start to monetize its user base. And completely screw up the economists at the Treasury, not to mention disintermediating the banks completely.

    The only other ubiquitous asset is mobile minutes. People exchanging mobile minutes for tangible goods and services happens today; the big difference, however, is the demographic. While Safaricom offers this ability in Kenya today, FB has the 15-40-year-old middle-class user as its user. And the next generation is growing up with FB as a standard channel for communicating with their peers. Virtual flowers when going in for the kill? If your target is an avid FB user, why not? It certainly is a lot more green - no pun intended!
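    A quick worked version of that multiplier (the 20% reserve ratio is an assumption chosen only to match the $10-to-$8 example above): if banks re-lend 80% of every deposit, the geometric series gives

        10 + 8 + 6.4 + \dots = \sum_{k=0}^{\infty} 10 \cdot 0.8^k = \frac{10}{1 - 0.8} = 50

    so the original $10 can support up to $50 of money supply.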

    Read the article

  • 2011 - ALMs for your development team and the people they work with.

    - by David V. Corbin
    Welcome to 2011; it is already shaping up to be a very exciting year. The title of the post is not about charitable giving, although that is also a great topic. The topic is Application Lifecycle Management and the systems that support it, and 2011 will be a year where I expect many teams to invest heavily in this area. For those not familiar with ALM, it can be simplified down to "a comprehensive view of all of the ideas, requirements, activities and artifacts that impact an application over the course of its lifecycle, from concept until decommissioning". Obviously, this encompasses a large number of different areas, even for relatively small and medium sized projects. In recent years, many teams have adapted methodologies which address individual aspects of this, but the majority of this adoption has resulted in "islands of improvement" rather than the desired comprehensive outcome... until now!

    Last year Microsoft released Team Foundation Server 2010 along with Visual Studio 2010 Ultimate Edition, and with these two in combination the situation has drastically changed. At last there is a single environment that is capable of handling all aspects of ALM, and is also capable of dealing with migration and integration with existing systems to make the transition to a single solution much easier. These possibilities (and practicalities) are nothing short of amazing.

    Architecture through Testing integration? YES.
    Being able to correlate specific requirement items (and their history) to actual code (and code history)? YES.
    Identification of which tests will be potentially impacted by a given code change? YES.
    Resilient Automated Testing of User Interfaces? YES.
    Automatic Deployment Management? YES.
    Integration-level testing as part of (designated) Builds? YES.

    I could easily double or triple the above list, but these items should be enough to get you thinking about the "pain points" your team and organization currently face and the fact that there IS a way to relieve the pain. Over the course of the year, I am hoping to bring together some of the "best of breed" information, along with hosting (and participating in) discussions with various experts in the field. There are already a number of groups (including many on LinkedIn) that have an ALM focus, and I encourage everyone to check them out. I will be posting a list of the ones I find most helpful in the not too distant future. As I said at the beginning, 2011 is shaping up to be a very interesting (and productive) year. Why wait to start investigating and adopting ALM?

    ps: For those interested in becoming an "alms giver" in the charitable sense, I highly recommend checking out GiveCamp. A group of developers, designers and others get together to create a solution for a charity in just under 48 hours. I will be attending the GiveCamp in New York City on Jan 14-16; more information is available at nycgivecamp.org/

    Read the article

  • In a multidisciplinary team, how much should each member's skills overlap?

    - by spade78
    I've been working in embedded software development for this small startup and our team is pretty small: about 3-4 people. We're responsible for all engineering, which involves an RF device controlled by an embedded microcontroller that connects to a PC host which runs some sort of data collection and analysis software. I have come to develop these two guidelines when I work with my colleagues: Define a clear separation of responsibilities and make sure each person's contribution to the final product doesn't overlap. Don't assume your colleagues know everything about their responsibilities; I assume there is some sort of technology that I will need to be competent at to properly interface with the work of my colleagues.

    The first point is pretty easy for us. I do firmware, one guy does the RF, another does the PC software, and the last does the DSP work. Nothing overlaps in terms of two people's work being mixed into the final product. For that to happen, one guy has to hand off work to another guy, who will vet it and integrate it himself.

    The second point is the heart of my question. I've learned the hard way not to trust the knowledge of my colleagues absolutely, no matter how many years of experience they claim to have. At least not until they've demonstrated it to me a couple of times. So, given that, whenever I develop a piece of firmware that interfaces with some technology I don't know, I'll try to learn it and develop a piece of test code that helps me understand what they're doing. That way, if my piece of the product comes into conflict with another piece, then I have some knowledge about possible causes. For example, the PC guy has started implementing his GUIs in .NET WPF (C#) and using LibUSBdotNET for USB access. So I've been learning C# and the .NET USB library that he uses, and I built a little console app to help me understand how that USB library works.

    Now all this takes extra time and energy, but I feel it's justified as it gives me a foothold to confront integration problems. Also, I like learning this new stuff, so I don't mind. On the other hand, I can see how this can turn into a time sink for work that won't make it into the final product and may never turn into a problem. So how much experience/skills overlap do you expect in your teammates relative to your own skills? Does this issue go away as the teams get bigger and more diverse?

    Read the article

  • Token based Authentication and Claims for Restful Services

    - by Your DisplayName here!
    WIF as it exists today is optimized for web applications (passive/WS-Federation) and SOAP based services (active/WS-Trust). While there is limited support for WCF WebServiceHost based services (for standard credential types like Windows and Basic), there is no ready-to-use plumbing for RESTful services that do authentication based on tokens. This is not an oversight by the WIF team – the REST services security world is currently changing rapidly, and that’s by design. There are a number of intermediate solutions, emerging protocols and token types, as well as some already deprecated ones, so it didn’t make sense to bake that into the core feature set of WIF. But after all, the F in WIF stands for Foundation. So just like the WIF APIs integrate tokens and claims into other hosts, this is also (easily) possible with RESTful services. Here’s how.

    HTTP Services and Authentication

    Unlike SOAP services, in the REST world there is no (over)specified security framework like WS-Security. Instead, standard HTTP means are used to transmit credentials, and SSL is used to secure the transport and data in transit. In most cases the HTTP Authorization header is used to transmit the security token (this can be anything from a simple username/password up to issued tokens of some sort). The Authorization header consists of the actual credential (consider this opaque from a transport perspective) as well as a scheme. The scheme is a string that gives the service a hint as to what type of credential was used (e.g. Basic for basic authentication credentials). HTTP also includes a way to advertise the right credential type back to the client; for this, the WWW-Authenticate response header is used.

    So for token based authentication, the service simply needs to read the incoming Authorization header, extract the token, then parse and validate it. After the token has been validated, you also typically want some sort of client identity representation based on the incoming token. This is regardless of which technology the actual service was built with. In ASP.NET (MVC) you could use an HttpModule or an ActionFilter. In (today's) WCF, you would use the ServiceAuthorizationManager infrastructure. The nice thing about using WCF's native extensibility points is that you get self-hosting for free.

    This is where WIF comes into play. WIF has ready-to-use infrastructure built in that just needs to be plugged into the corresponding hosting environment:

    Representation of identity based on claims. This is a very natural way of translating a security token (and again I mean this in the widest sense – it could also be a username/password) into something our applications can work with.
    Infrastructure to convert tokens into claims (called security token handlers).
    Claims transformation.
    Claims-based authorization.

    So much for the theory. In the next post I will show you how to implement that for WCF – including full source code and samples. (Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)
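    As a concrete illustration of the ActionFilter option mentioned above, here is a rough sketch (mine, not from the post). ValidateToken is a hypothetical placeholder for whatever WIF security token handler plumbing you would actually plug in; the header handling follows the description above.

        // Sketch of the ASP.NET MVC ActionFilter approach (uses System.Web.Mvc
        // and System.Security.Principal). ValidateToken is a placeholder for
        // the real WIF security token handler call.
        public class TokenAuthenticationAttribute : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                var header = filterContext.HttpContext.Request.Headers["Authorization"];

                if (string.IsNullOrEmpty(header))
                {
                    // Advertise the expected credential type back to the client.
                    filterContext.HttpContext.Response.AddHeader("WWW-Authenticate", "Bearer");
                    filterContext.Result = new HttpUnauthorizedResult();
                    return;
                }

                // The header is "<scheme> <credential>", e.g. "Bearer <token>".
                var parts = header.Split(new[] { ' ' }, 2);
                var scheme = parts[0];
                var credential = parts.Length > 1 ? parts[1] : string.Empty;

                // Parse and validate the token, then represent the caller as claims.
                IPrincipal principal = ValidateToken(scheme, credential);
                filterContext.HttpContext.User = principal;
            }

            // Placeholder: a real implementation would run a WIF security token
            // handler here and reject the request on validation failure.
            private IPrincipal ValidateToken(string scheme, string credential)
            {
                throw new NotImplementedException();
            }
        }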

    Read the article

  • PowerShell – Show a Notification Balloon

    - by BuckWoody
    In my presentations for PowerShell I sometimes want to start a process (like a backup) that will take some time. I normally pop up a notification “balloon” at the start, then do the bulk of the work, and then pop up a balloon at the end to let me know it’s done. You can actually try out this little sample (on a test system, of course) without any other code to see what it does. Then just put the other PowerShell commands in the # Do some work part. Oh – throw an icon (.ico file) in a c:\temp directory or point that somewhere else. (No, this probably isn’t original. I can’t remember where I saw the original code, but I’ve modified it a bit anyway, so if you’re the original author and this looks slightly familiar, post a comment.)

        [void] [System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
        $objBalloon = New-Object System.Windows.Forms.NotifyIcon
        $objBalloon.Icon = "C:\temp\Folder.ico"
        # You can use the value Info, Warning, Error
        $objBalloon.BalloonTipIcon = "Info"
        # Put what you want to say here for the start of the process
        $objBalloon.BalloonTipTitle = "Begin Title"
        $objBalloon.BalloonTipText = "Begin Message"
        $objBalloon.Visible = $True
        $objBalloon.ShowBalloonTip(10000)

        # Do some work

        # Put what you want to say here for the completion of the process
        $objBalloon.BalloonTipTitle = "End Title"
        $objBalloon.BalloonTipText = "End Message"
        $objBalloon.Visible = $True
        $objBalloon.ShowBalloonTip(10000)

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.
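    If you use this pattern in more than one demo, one option is to wrap it in a small helper; this is just a sketch (the function name is mine, not from the post), using exactly the same NotifyIcon calls as above.

        # Sketch: the same balloon logic wrapped in a reusable function.
        function Show-Balloon {
            param([string]$Title, [string]$Text, [string]$IconPath = "C:\temp\Folder.ico")
            [void] [System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
            $balloon = New-Object System.Windows.Forms.NotifyIcon
            $balloon.Icon = $IconPath
            $balloon.BalloonTipIcon = "Info"
            $balloon.BalloonTipTitle = $Title
            $balloon.BalloonTipText = $Text
            $balloon.Visible = $True
            $balloon.ShowBalloonTip(10000)
        }

        Show-Balloon -Title "Begin Title" -Text "Begin Message"
        # Do some work
        Show-Balloon -Title "End Title" -Text "End Message"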

    Read the article

  • Five Best Practices for Going Mobile

    - by kellsey.ruppel
    76% of IT decision makers indicate mobile trends will have a high to extremely high impact on their organization. Has your organization gone mobile? Looking for some ideas on how to get started? John Brunswick shares his Best Practices for Going Mobile.

    Mobile technology has gone from nice-to-have to a cornerstone of user engagement. Mobile access enables social networking, decision support, purchasing, content consumption, and location-based searching, extending experiences beyond what is available in traditional desktop computing. Organizations rushing to ensure their brand's mobile availability may have taken a tactical approach to implementation, but strategically approaching mobile can enable greater returns on a similar investment and subsequent mobile projects. Here are some strategic considerations for delivering products, services, and information to mobile constituents.

    Who, Why, and What? Ask yourself these key questions: who are you attempting to engage through the channel, and why are they engaging you through this channel? What experience will satisfy their needs? What outcome will support your core business? Will you be informing and/or transacting with this person?

    Mobile Behavior. Mobile users generally engage for a very specific purpose. Ensure that access to information, services, and products is streamlined. Arriving on a mobile site through search only to be asked to search again frustrates users.

    Mobile Is Broad. After establishing the audience and goal, review technology requirements to support them. Do you need a mobile Website, native mobile application, or both? Do you need to support multiple devices? Know the difference between native mobile and mobile Web.

    Social Strategy. Users are more likely to trust reviews from peers than marketing information from a vendor. If you are selling products or services, be sure to make social integration part of your strategy.

    Content Management. Consider a shared content platform strategy for Web and mobile projects. Fresh, consistent content is important for high-quality experiences.

    Read more from John Brunswick. We'll also be talking mobile strategies and how you can transform your portal experience and optimize online engagement -- making your portals more interactive and more engaging across multiple channels -- in a webcast tomorrow. We hope you'll join us!

    Read the article

  • Software monetization that is not evil

    - by t0x1n
    I have a free open-source project with around 800K downloads to date. I've been contacted by some monetization companies from time to time and turned them down, since I didn't want toolbar malware associated with my software. I was wondering, however: is there a non-evil way to monetize software? Here are the options as I know them:

    Add a donation button. I don't feel comfortable with that, as I really don't need "donations" - I'm paid quite well. Donating users may feel entitled to support etc. (see the second to last bullet).

    Add ads inside your application. On the web that may be acceptable, but in a desktop program it looks incredibly lame.

    Charge a small amount for each download. This model works well in the mobile world, but I suspect no one will go for it on the desktop. It doesn't mix well with open source, though I suppose I could charge only for the binaries (most users won't go to the hassle of compiling the sources). People may expect support etc. after having explicitly paid (see next bullet).

    Make money off a service / community / support associated with the program. This is one route I definitely don't want to take; I don't want any sort of hassle beyond coding. I assure you, the program is top notch (albeit simple) and I'm not aware of any bugs as of yet (there are support forums and blog comments where users may report them). It is also very simple, documented, and discoverable, so I do think I have a case for supplying it "as is".

    Add affiliate suggestions to your installer. If you use a monetization company, you lose control over what they propose. Unless you can establish some sort of strong trust with the company to supply quality suggestions (I sincerely doubt it), I can't have that. Choosing your own affiliate (e.g. directly suggesting Google Toolbar) is possibly the only viable solution to my mind. Problem is, where do I find a solid affiliate that could actually give value to the user rather than infect his computer with crapware? I thought maybe Babylon (not the toolbar of course, I hate toolbars)?

    Read the article

  • Express your personality and potential @ Oracle

    - by jessica.ebbelaar(at)oracle.com
    Ciao, my name is Michel and I am a 24 year old guy from Forlì, Italy, working as a Business Intelligence Business Development Consultant in Rome. After I completed the Bachelor's Degree in Business Administration at Bologna University, I took a Multiple Master of Science in International Management organized by three European universities: Bologna University (IT), ICN Business School of Nancy (FR) and Uppsala University (SE). I therefore had the chance to travel a lot and, most important, to study and meet hundreds of people from all over the world. This experience enhanced the passion I foster for international environments, different cultures and countries, not to mention the learning of foreign languages. Working for such a structured multinational as Oracle totally reflects my desire to be surrounded by a multicultural and international atmosphere, having the opportunity to grow from the personal point of view and to endlessly boost my career path.

    Demand Generation

    My department is responsible for demand generation activities. That implies, for instance, the implementation of various strategies aimed at feeding the pipeline for Business Intelligence products in the Italian market. Organizing marketing campaigns and events, and providing ideas or contacts to the sales force, are just a few examples of our work. I like to define the role of business development as something that translates the marketing insights into tools to increase sales, accounting for the differences amongst countries, companies and industries. Furthermore, it is important to collaborate with the EMEA team to share knowledge and best practices. My initial lack of an IT background has been constantly covered by the managers and my personal mentor. The thing I appreciate most is indeed the fact that I always feel I am a growing potential, becoming essential day after day. I am surprised by the trust and confidence people have in me and how they proudly encourage my personal initiative and always spur me to contribute.

    Career Ambitions

    If your ambitions are to work within an international but extremely people focused environment, to contribute to the growth of one of the most successful companies in the world, to deal with a fast-paced industry and highly competitive market, to have the chance to fully express your personality and potential and to satisfy your career ambitions over the years, then Oracle is right for YOU. Looking forward to having YOU aboard! Do you want to find out more about the open roles within Oracle? Follow us on http://campus.oracle.com.

    Read the article

  • Mixing Forms and Token Authentication in a single ASP.NET Application

    - by Your DisplayName here!
    I recently had the task to find out how to mix ASP.NET Forms Authentication with WIF’s WS-Federation. The FormsAuth app did already exist, and a new sub-directory of this application should use ADFS for authentication. Minimum changes to the existing application code would be a plus ;) Since the application is using ASP.NET MVC this was quite easy to accomplish – WebForms would be a little harder, but still doable. I will discuss the MVC solution here.

    To solve this problem, I made the following changes to the standard MVC internet application template:

    Added WIF’s WSFederationAuthenticationModule and SessionAuthenticationModule to the modules section.
    Added a WIF configuration section to configure the trust with ADFS.
    Added a new authorization attribute. This attribute will go on controllers that demand ADFS (or STS in general) authentication.

    The attribute logic is quite simple – it checks for authenticated users, and additionally that the authentication type is set to Federation. If that’s the case, all is good; if not, the redirect to the STS will be triggered.

        public class RequireTokenAuthenticationAttribute : AuthorizeAttribute
        {
            protected override bool AuthorizeCore(HttpContextBase httpContext)
            {
                if (httpContext.User.Identity.IsAuthenticated &&
                    httpContext.User.Identity.AuthenticationType.Equals(
                        WIF.AuthenticationTypes.Federation, StringComparison.OrdinalIgnoreCase))
                {
                    return true;
                }

                return false;
            }

            protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
            {
                // do the redirect to the STS
                var message = FederatedAuthentication.WSFederationAuthenticationModule.CreateSignInRequest(
                    "passive", filterContext.HttpContext.Request.RawUrl, false);

                filterContext.Result = new RedirectResult(message.RequestUrl);
            }
        }

    That’s it ;) If you want to know why this works (and a possible gotcha) – read my next post.
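    For completeness, a usage sketch: any controller in the ADFS-protected sub-directory just gets decorated with the attribute (the controller name here is illustrative, not from the post).

        // Usage sketch - the controller name is illustrative only.
        [RequireTokenAuthentication]
        public class ReportsController : Controller
        {
            public ActionResult Index()
            {
                return View();
            }
        }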

    Read the article

  • How Big Data and Social Won the Election

    - by Mike Stiles
    The story of big data’s influence on the outcome of the US Presidential election is worth a good look, because a) it’s a harbinger of things to come, and b) it’s an example of similar successes available to any enterprise seriously resourcing integrated big data, modeling, and data-driven execution on all assets, including social.

    Obama campaign manager Jim Messina fielded a data and analytics brain trust 5 times larger than in 2008. At that time, there were numerous databases from various sources, few of them talking to each other. This time, the mission was to be metrics-centered and measure everything measurable, in context with all the other data. Big data showed them exactly what they needed to know and told them what to do about it. It showed them women 40-49 on the west coast would donate big money if they got to eat with George Clooney. Women on the east coast would pony up to hang out with Sarah Jessica Parker. Extensive daily modeling showed them what kinds of email appeals, from whom, and to whom, would prove most successful in raising cash, recruiting volunteers, and getting out the vote. Swing state voters were profiled and approached with more customized targeting than at any time in history. Ads were purchased on specific shows watched by the targets, increasing efficiency 14% over traditional media buys. For all the criticism of the candidate’s focus on appearing on comedy and entertainment shows, and local radio morning shows, that’s where the data sent them to reach the voters most likely to turn out for them.

    And then there was social. Again, more than in any other election, Facebook was used for virtual, highly efficient door-to-door canvassing. Facebook fans got pictures of friends in swing states and were asked to encourage them to act. Using that approach, 1 in 5 peer-to-peer appeals led to the desired action. Assumptions, gut, intuition, campaign experience, all took a backseat to strategy shifts solidly backed up by data. Zeroing in on demographics likely to back the President and tracking their mood daily literally changed the voter landscape. The Romney team watched Obama voters appear seemingly out of thin air. One Obama campaign aide said, “We ran the election 66,000 times every night.”

    Which brings us to your organization. If you’re starting to feel like the battle-cry of “but this is the way we’ve always done it” is putting you in an extremely vulnerable position, you’re right. Social has become a key communication tool of the 21st century. Failing to use it, or failing to invest in a deep understanding of who your customers and prospects are so the content you post there will achieve desired actions and results, will leave you waking up one morning wondering, “What happened?”

    @mikestiles. Photo: stock.xchng

    Read the article

  • /etc/postfix/transport missing; what should it look like?

    - by Thufir
    I'm following the mailman guide but couldn't locate /etc/postfix/ so created it as follows: root@dur:~# root@dur:~# cat /etc/postfix/transport dur.bounceme.net mailman: root@dur:~# root@dur:~# telnet localhost 25 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. 220 dur.bounceme.net ESMTP Postfix (Ubuntu) ehlo fqdn_test 250-dur.bounceme.net 250-PIPELINING 250-SIZE 10240000 250-VRFY 250-ETRN 250-STARTTLS 250-ENHANCEDSTATUSCODES 250-8BITMIME 250 DSN mail from:[email protected] 250 2.1.0 Ok rcpt to:thufir@localhost 451 4.3.0 <thufir@localhost>: Temporary lookup failure rcpt to:[email protected] 451 4.3.0 <[email protected]>: Temporary lookup failure quit 221 2.0.0 Bye Connection closed by foreign host. root@dur:~# root@dur:~# postconf -n alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix default_transport = smtp home_mailbox = Maildir/ inet_interfaces = loopback-only mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}" mailbox_size_limit = 0 mailman_destination_recipient_limit = 1 mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost myhostname = dur.bounceme.net mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 readme_directory = no recipient_delimiter = + relay_domains = lists.dur.bounceme.net relay_transport = relay relayhost = smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtp_use_tls = yes smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_authenticated_header = yes smtpd_sasl_local_domain = $myhostname smtpd_sasl_path = private/dovecot-auth smtpd_sasl_security_options = noanonymous smtpd_sasl_type = dovecot smtpd_tls_auth_only = yes smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key smtpd_tls_mandatory_ciphers = medium smtpd_tls_mandatory_protocols = SSLv3, TLSv1 smtpd_tls_received_header = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes tls_random_source = dev:/dev/urandom transport_maps = hash:/etc/postfix/transport root@dur:~# root@dur:~# tail /var/log/mail.log Aug 28 02:05:15 dur postfix/smtpd[20326]: connect from localhost[127.0.0.1] Aug 28 02:06:10 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases is unavailable. open database /var/lib/mailman/data/aliases.db: No such file or directory Aug 28 02:06:10 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases lookup error for "thufir@localhost" Aug 28 02:06:10 dur postfix/smtpd[20326]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <thufir@localhost>: Temporary lookup failure; from=<[email protected]> to=<thufir@localhost> proto=ESMTP helo=<fqdn_test> Aug 28 02:06:23 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases is unavailable. 
open database /var/lib/mailman/data/aliases.db: No such file or directory Aug 28 02:06:23 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases lookup error for "[email protected]" Aug 28 02:06:23 dur postfix/smtpd[20326]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<fqdn_test> Aug 28 02:06:28 dur postfix/smtpd[20326]: disconnect from localhost[127.0.0.1] Aug 28 02:06:49 dur dovecot: pop3-login: Login: user=<thufir>, method=PLAIN, rip=127.0.0.1, lip=127.0.0.1, mpid=20338, TLS Aug 28 02:06:49 dur dovecot: pop3(thufir): Disconnected: Logged out top=0/0, retr=0/0, del=0/0, size=0 root@dur:~# The manual page is here.
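    For what it's worth, both warnings in mail.log point at missing hash database (.db) files rather than at the transport file's contents. A plausible fix, assuming the stock Debian/Ubuntu Mailman layout shown in the question's postconf output, is to build the .db files and reload Postfix:

        postmap /etc/postfix/transport            # builds /etc/postfix/transport.db
        /usr/lib/mailman/bin/genaliases           # should regenerate /var/lib/mailman/data/aliases and aliases.db
        postalias /var/lib/mailman/data/aliases   # in case genaliases does not rebuild the .db
        postfix reload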

    Read the article

  • What are some reasonable stylistic limits on type inference?

    - by Jon Purdy
    C++0x adds pretty darn comprehensive type inference support. I'm sorely tempted to use it everywhere possible to avoid undue repetition, but I'm wondering if removing explicit type information all over the place is such a good idea. Consider this rather contrived example:

    Foo.h:

        #include <set>

        class Foo {
        private:
            static std::set<Foo*> instances;

        public:
            Foo();
            ~Foo();

            // What does it return? Who cares! Just forward it!
            static decltype(instances.begin()) begin() { return instances.begin(); }
            static decltype(instances.end()) end() { return instances.end(); }
        };

    Foo.cpp:

        #include "Foo.h"
        #include "Bar.h"

        // The type need only be specified in one location!
        // But I do have to open the header to find out what it actually is.
        decltype(Foo::instances) Foo::instances;

        Foo::Foo() {
            // What is the type of x?
            auto x = Bar::get_something();

            // What does do_something() return?
            auto y = x.do_something(*this);

            // Well, it's convertible to bool somehow...
            if (!y) throw "a constant, old school";

            instances.insert(this);
        }

        Foo::~Foo() {
            instances.erase(this);
        }

    Would you say this is reasonable, or is it completely ridiculous? After all, especially if you're used to developing in a dynamic language, you don't really need to care all that much about the types of things, and can trust that the compiler will catch any egregious abuses of the type system. But for those of you who rely on editor support for method signatures, you're out of luck, so using this style in a library interface is probably really bad practice. I find that writing things with all possible types implicit actually makes my code a lot easier for me to follow, because it removes nearly all of the usual clutter of C++. Your mileage may, of course, vary, and that's what I'm interested in hearing about. What are the specific advantages and disadvantages to radical use of type inference?
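    One middle ground worth noting (my suggestion, not the poster's): a named C++11 alias keeps the type spelled out once, so readers and editors still see a real type while the repetition goes away.

        // Sketch: name the container type once with an alias instead of
        // repeating decltype at every forwarding function.
        #include <set>

        class Foo {
        public:
            using instance_set = std::set<Foo*>;

            static instance_set::iterator begin() { return instances.begin(); }
            static instance_set::iterator end()   { return instances.end(); }

        private:
            static instance_set instances;  // defined in Foo.cpp as: Foo::instance_set Foo::instances;
        };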

    Read the article

  • Phone number in meta description bad or good for local rankings NAP

    - by bybe
    Once again I'm at it with increasing people's local rankings, and I've learnt so much about local rankings in the past 2 weeks it feels like my brain is gonna pop. Anyway, the question is fairly simple for someone who engages in local rankings, and I appreciate the answer may involve a little guesswork, but isn't SEO mostly guessing anyway? From what I've read and learned, Google works off a system called NAP for local rankings (with many other factors, but this question is purely based on NAP). For people who care about local rankings, NAP stands for Name of Business / Address of Business / Phone Number for Business.

    Now, from what I've read, you don't need the whole NAP to be on one website; a P or just an N can help towards your local rankings. It's believed that NAP rewards more than just P and N, for example, but knowing Google they might have a diversity checker, which is my concern, as you'll get to in a moment. Of course, sites weight differently where your business is posted; it's certainly going to be more credible if your NAP details are on your national phone book than, say, a blog site, so take in this consideration too.

    Pure guess (not a part of the question, but nonetheless a good read on my belief). My guesswork would make me believe that the formula would look something like ((N) + (A) + (P)) x (T), where (N)ame would be 1 or 0 to indicate present or not, (A)ddress would be 1 or 0 to indicate present or not, (P)hone would be 1 or 0 to indicate present or not, and (T)rust would be 1-100 to indicate level of trust. So a phone number appearing on YouTube might look something like (0+0+1) x 95 = 95, and a NAP appearing in your national phone book might look something like (1+1+1) x 100 = 300. Please note that I'm not saying this is the sole factor; I'm sure it's way more complex than this, with things like other factors on the page and off the page (reviews, links, clicks) and so on, but it's still a contributor.

    The question. My question is fairly simple, and I'd imagine hard to impossible to answer definitively, but maybe someone has seen official wording elsewhere on this: is it bad to include the address or phone number in the meta description? The reason I ask is that one of my competitors has these elements in their meta descriptions and their local rankings are absolutely superb. The problem I have with this is scraper bot sites like 'Similar Too' and 'Seo Rankings', and thousands of the other scraper networks that scrape sites and then make URLs with your site information, are mostly limited to your meta description. What this means is that your phone number, address and sometimes even your company name (if the domain is exact) will appear as AP, and even NAP, on thousands of websites. So, is it a bad strategy to include phone number and address in the meta description? Everything I've read would suggest it's good, of course with the downside of maybe lowering the quality of the description for click-throughs, but top rankings would increase those tenfold anyhow...

    Read the article

  • Learn programming backwards, or "so I failed the FizzBuzz test. Now what?"

    - by moraleida
    A Little Background

    I'm 28 today, and I've never had any formal training in software development, but I do have two higher education degrees equivalent to a B.A. in Public Relations and an Executive MBA focused on Project Management. I worked in those fields for about 6 years total and then, 2.5 years ago, I quit/lost my job and decided to shift directions. After a month thinking things through, I decided to start freelancing, developing small websites in WordPress. I self-learned my way into it, and today I can say I run a humble but successful career developing themes and plugins from scratch for my clients - mostly agencies outsourcing some of their dev work for medium/large websites. But sometimes I just feel that not having studied enough math, or not having a formal understanding of things, really holds me back when I have to compete or work with more experienced developers.

    I'm constantly looking for ways to learn more, but I seem to lack the basics. Unfortunately, spending 4 more years in Computer Science is not an option right now, so I'm trying to learn all I can from books and online resources. This method is never going to have NASA employ me, but I really don't care right now. My goal is to first pass the bar and be able to call myself a real programmer. I'm currently spending my spare time studying Java For Programmers (to get a hold on a language everyone says is difficult/demanding), reading excerpts of Code Complete (to get hold of best practices) and also Code: The Hidden Language of Computer Hardware and Software (to grasp the inner workings of computers).

    TL;DR

    So, my current situation is this: I'm basically capable of writing any complete system in PHP (with the help of Google and a few books), integrating Ajax, SQL and whatnot, maybe a little slower than an experienced dev would expect due to all the research involved. But I was stranded yesterday trying to figure out (without Google) a solution for the FizzBuzz test, because I didn't have the modulus operator (as in if($n1 % $n2 == 0)) memorized. What would you suggest as a good way to solve this dilemma? What subjects/books should I study that would get me solving problems faster and maybe more "in a programmer's way"?

    EDIT - Seems that there was some confusion about what I did not know to solve FizzBuzz. Maybe I didn't express myself right: I knew the steps needed to solve the problem. What I didn't memorize was the modulus operator. The problem was in transposing basic math to the program, not in knowing basic math. I took the test for fun, after reading about it on Coding Horror. I just decided it was a good base-comparison line between me and formally-trained devs. I just used this as an example of how not having dealt with math in a computer environment before makes me lose time looking up basic things like modulus operators to be able to solve simple problems.
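    For reference, here is a minimal FizzBuzz in the PHP the poster works in, showing the modulus operator he mentions (a sketch of the standard exercise, not code from the post):

        <?php
        // Minimal FizzBuzz: % yields the remainder of integer division,
        // so $n % 3 == 0 means "$n is divisible by 3".
        for ($n = 1; $n <= 100; $n++) {
            if ($n % 15 == 0) {
                echo "FizzBuzz\n";   // divisible by both 3 and 5
            } elseif ($n % 3 == 0) {
                echo "Fizz\n";
            } elseif ($n % 5 == 0) {
                echo "Buzz\n";
            } else {
                echo $n . "\n";
            }
        }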

    Read the article

  • What are the drawbacks of sending XML to browsers and let them apply XSLT?

    - by MainMa
    Context

    Working as a freelance developer, I often made websites completely based on XSLT. In other words, on every request, an XML file is generated, containing everything we need to know about the page content: the name of the user currently logged in, the top menu entries, whether this menu is dynamic/configurable, the text to display in a specific area of the page, etc. Then an XSL process transforms it (with caching, etc.) into the HTML/XHTML page to send to the browser. Its good point is that it makes it easier to create small-scale websites, especially with PHP. It is a sort of template engine, but one which I prefer to other template engines because it's much more powerful than most of them, and because I know it better and like it. It is also possible, when needed, to give access to raw XML data on demand for automated access, without the need to create separate APIs. Of course, it will fail completely on any medium-scale or large-scale website, since, even with good caching techniques, XSL still degrades overall website performance and requires more CPU server-side.

    Question

    Modern browsers have the ability to take an XML file and transform it with an associated XSL file declared in the XML like <?xml-stylesheet href="demo.xslt" type="text/xsl"?>. Firefox 3 can do it. Internet Explorer 8 can do it too. This means that it is possible to migrate XSL processing from the server to the client side for 50% of users (according to browser statistics on several websites where I may want to implement this). It means that those 50% of users will receive only the XML file on each request, thus reducing both their bandwidth and the server's (the XML file being much shorter than its processed HTML analog), and reducing the server's CPU usage. What are the drawbacks of this technique? I thought about several, but they don't apply in this situation:

    Difficult implementation, and the need to choose, based on the browser request, when to send raw XML and when to transform it to HTML instead. Obviously, the system will not be much more difficult than the current one. The only changes to make are to add the XSL file link to every XML, and to add a browser check.
    More IO and bandwidth usage, since the XSLT file will be downloaded by the browsers instead of being cached by the server. I don't think it will be a problem, since the XSLT file will be cached by the browsers (like images, CSS, or JavaScript files are cached already).
    Possibly some problems on the client side, like maybe problems when saving a page in some browsers.
    Difficulty debugging code: it is impossible to obtain the HTML source the browser is actually using, since the only displayed source is the downloaded XML. On the other hand, I rarely go look at HTML code on the client side, and in most cases, it is unusable directly (whitespace being removed).
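    A minimal, self-contained illustration of the client-side setup the question describes (the file names and contents are my own, for illustration only):

    page.xml, what the server would send:

        <?xml version="1.0" encoding="UTF-8"?>
        <?xml-stylesheet href="demo.xslt" type="text/xsl"?>
        <page>
          <user>Alice</user>
          <content>Hello, world</content>
        </page>

    demo.xslt, transformed by the browser into HTML:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/page">
            <html>
              <body>
                <p>Logged in as <xsl:value-of select="user"/></p>
                <p><xsl:value-of select="content"/></p>
              </body>
            </html>
          </xsl:template>
        </xsl:stylesheet>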

    Read the article

  • Started wrong with a project. Should I start over?

    - by solidsnake
    I'm a beginner web developer (one year of experience). A couple of weeks after graduating, I got offered a job to build a web application for a company whose owner is not much of a tech guy. He recruited me to avoid theft of his idea, the high cost of development charged by a service company, and to have someone young he could trust onboard to maintain the project for the long run (I came to these conclusions by myself long after being hired). Cocky as I was back then, with a diploma in computer science, I accepted the offer thinking I could build anything. I was calling the shots. After some research I settled on PHP, and started with plain PHP: no objects, just ugly procedural code. Two months later, everything was getting messy, and it was hard to make any progress. The web application is huge. So I decided to check out an MVC framework that would make my life easier. That's where I stumbled upon the cool kid in the PHP community: Laravel. I loved it, it was easy to learn, and I started coding right away. My code looked cleaner, more organized. It looked very good.

    But again, the web application was huge. The company was pressuring me to deliver the first version, which they wanted to deploy, obviously, and start seeking customers. Because Laravel was fun to work with, it made me remember why I chose this industry in the first place - something I forgot while stuck in the shitty education system. So I started working on small projects at night, reading about methodologies and best practice. I revisited OOP, moved on to object-oriented design and analysis, and read Uncle Bob's book Clean Code. This helped me realize that I really knew nothing. I did not know how to build software THE RIGHT WAY. But at this point it was too late, and now I'm almost done. My code is not clean at all, just spaghetti code, a real pain to fix a bug; all the logic is in the controllers, and there is little object-oriented design.

    I'm having this persistent thought that I have to rewrite the whole project. However, I can't do it... They keep asking when it is going to be done. I cannot imagine this code deployed on a server. Plus, I still know nothing about code efficiency and the web application's performance. On one hand, the company is waiting for the product and cannot wait anymore. On the other hand, I can't see myself going any further with the actual code. I could finish up, wrap it up and deploy, but god only knows what might happen when people start using it. What do you think I should do?

    Read the article

  • How to handle this unfortunately non hypothetical situation with end-users?

    - by User Smith
    I work in a medium sized company but with a very small IT force. Last year (2011), I wrote an application that is very popular with a large group of end-users. We hit a deadline at the end of last year and some functionality (I will call it funcA from now on) that was wanted at the very end was not added into the application. So, this application has been running in live/production since the end of 2011, I might add without issue.

    Yesterday, a whole group of end-users started complaining that funcA, which was never in the application, is no longer working. Our priority at this company is that if an application is broken, it must be fixed first, prior to prioritized projects. I have compared code and queries and there is no difference since 2011, which is proofA. I then was able to get one of the end-users to admit that it never worked (proofB), but since then that end-user has gone back and said that it was working previously... I believe the horde of end-users has assimilated her. I have also reviewed my notes for this project, which contain requirements and daily updates and specifically state, "funcA not achieved due to time constraints" (proofC).

    I have spoken with many of them, and I can see where they could be confused, as they are very far from a programming background, but I also know they are intelligent enough to act as a group in order to bypass project prioritization orders and get functionality that they want to make their job easier. The worst part is that now groupthink is setting in, and my boss and the head of IT are actually starting to believe them, even though there are no code or query changes. As far as reviewing the state of the logic, it is very cut and dried, to the point of: if 1 = 1, funcA will not work.

    So, this is the end of the description of my scenario, but I am trying not to get severely dinged on my performance metrics due to this, which would essentially have me moved to fixing a production problem that doesn't exist and will probably take over 1 month. I am looking for direct answers to this question. This question is not for rants, polling, or discussions, as this is not the format for StackExchange. Please don't downvote me too terribly; it is pretty common on this specific site of Stack. I am looking for honest answers to this situation, and I couldn't find a forum more appropriate.

    Read the article

  • How to move a website and domain name without experiencing downtime for emails or site?

    - by user4842
    Okay, I have a pretty complex problem, so I'll get right to it. I'm a designer who built a new website for my client. Their old site is hosted at GoDaddy, as well as their email. Problem is, the guy who built the original site decided to put the original domain name and hosting under HIS personal GoDaddy account. Well, that turned out to be a bad move for several reasons. Here's how it's all tied together.

    The original domain name, www.domainoriginal.com, was actually purchased at Network Solutions. The original web designer pointed the nameservers from Network Solutions to his GoDaddy account, where the email and hosting are set up. The new domain name, www.domainnew.com, was purchased under a new and separate GoDaddy account belonging to the company, and the new website was built on a third-party platform (Big Commerce). So, www.domainnew.com is already pointed to the new website using A records at the new GoDaddy account. All is fine there. However, they still need www.domainoriginal.com to point to the NEW website as well. (The old one can simply be deleted; it is NOT important.) AND, they want to keep their old email addresses intact and working as well, but under the NEW GoDaddy account.

    Obviously, I have no DNS control at Network Solutions, and I have no idea what kind of control I have at GoDaddy under the old account, because the web designer will not let me see inside his account. But he and GoDaddy both tell me nothing can be done other than to repoint the nameservers to Network Solutions, then repoint the A record to my new website, www.domainnew.com, and point the MX records to GoDaddy. I'm told the downtime would be 24-48 hours if I do this.

    Ideally, we'd like to do a domain name transfer and get www.domainoriginal.com into the new GoDaddy account created by the company. But I'm told this could take up to 7 days. Does this mean the site and email will be down for 7 days? And any emails sent during this time, would they be lost forever? If I do this, how long could I expect the site and email to go down? And will the emails be permanently lost? I've gotten different answers from everybody at GoDaddy, so I kind of don't trust them anymore... Any help would be greatly appreciated. Thanks, Tyson

    Read the article

  • Are we ready for the Cloud computing era?

    - by andrewkatumba
    "Elite?" developer circles are abuzz with the notion of cloud computing: the increasing bandwidth, the desire for faster and leaner operations, and of course the need to outsource non-core IT-related business requirements, e.g. word processing, spreadsheets, data backups. In strolls Chrome OS (I'm sure other similar OSes will join with their own wagons for us to jump on), offering just that: internet-based services (more like a repository of them), quick, efficient, and "reliable", for the most part cheap, and oftentimes even free! And we all go rhapsodic! It boils down to the age-old dilemma, "if the cops are so busy protecting us, then who will protect them" (even the folks back in Hollywood keep us reminded)! Who is going to ensure that these internet-based services do not go down (either intentionally or by some malicious third party), leading to a colossal multinational disaster? At the risk of sounding pessimistic, IT IS NOT AN ISSUE OF TRUST; this is but a mere case of Murphy's Law! What then? Should the "cloud" be trusted to this extent at this time? This is an era where challenges are rapidly solved with lightning promptness to "beat the competition". My hope is that our solutions are not just creating problems that we may not be able to solve! Keeping my ear on the ground.

    Read the article

  • Am I deluding myself? Business analyst transition to programmer

    - by Ryan
    Current job: Working as the lead business analyst for a Big 4 firm, leading a team of developers and testers on a large-scale re-platforming project (4 onshore devs, 4 offshore devs, several onshore/offshore testers). I also work in a similar capacity on other smaller-scale projects.

    Extent of my role: gathering/writing out requirements, creating functional specifications, designing the UI (basically mapping out all front-end aspects of the system), working closely with devs to communicate/clarify requirements and come up with solutions when we hit roadblocks, writing test cases (and doing much of the testing), working with senior management and key stakeholders, managing beta testers, creating user guides and leading training sessions, and providing key technical support. I also write quite a few macros in Excel using VBA (several of my macros are now used across the entire firm, so there are maybe around 1,000 people using them) and use SQL on a daily basis, both on the SQL compact files the program relies on, our SQL Server data, and any Access databases I create. The developers feel that I am quite good in this role because I understand a lot about programming, inherent system limitations, the structure of the databases, etc., so it's easier for me to communicate ideas and come up with suggestions when we face problems.

    What really interests me is developing software. I do a fair amount of programming in VBA and have been wanting to learn C# for a while (the dev team uses C# - I review code occasionally for my own sake but have not had any practical experience using it). I'm interested in not just the business process but also the technical side of things, so the traditional BA role doesn't really whet my appetite for the kind of stuff I want to do. Right now I have a few small projects that managers have given me, and I'm finding new ways to do them (like building custom Access applications), so there's a bit here and there to keep me interested.

    My question is this: what I would like to do is create custom Excel or Access applications for small businesses as a freelance business (working as a one-man shop, maybe with an occasional contractor depending on a project's complexity). This would obviously start out as a part-time venture while I have a day job, but eventually become a full-time job. Am I deluding myself to think I can go from BA/part-time VBA programmer to making a full-time go of a freelance business (where I would be starting out just writing custom Excel/Access apps in VBA)? Or is this type of thing not usually attempted until someone gains years of full-time programming experience? And is there even a market for these types of applications amongst small (and maybe medium-sized) businesses?

    Read the article

  • WebCenter Customer Spotlight: Regency Centers Corporation

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Regency Centers Corporation, based in Jacksonville, FL, is a leading national owner, operator, and developer of grocery-anchored and community shopping centers. Regency grew rapidly over much of the last decade. To keep up with the monthly and yearly administrative processes required to manage thousands of tenants, including reconciling yearly pass-through expenses, the customer upgraded to Oracle's JD Edwards EnterpriseOne Version 9.0 and deployed Oracle WebCenter Imaging, Process Management, and Oracle BI Publisher to streamline invoice processing and reporting. Using Oracle WebCenter Imaging, Regency accelerated and improved vendor invoice accuracy, which increases process integrity by identifying potential duplicate bills while enabling rapid approval of electronic invoice documents.

    Company Overview
    Regency Centers Corporation, based in Jacksonville, FL, is a leading national owner, operator, and developer of grocery-anchored and community shopping centers. The company owns 367 centers, totaling nearly 50 million square feet, located in top markets throughout the United States. Founded in 1963 and operating as a fully integrated real estate company, Regency is a qualified real estate investment trust that is self-administered and self-managed, operating from 17 regional offices around the country.

    Business Challenges
    - Ensure continued support of vital business applications that drive the real estate developer's key business processes, including property management and tenant payment processing
    - Streamline year-end expense recognition and calculation, enabling faster tenant billing
    - Move to a Web-based platform to deliver greater mobility and convenience to employees
    - Minimize system customizations to reduce IT management costs and burden moving forward

    Solution Deployed
    Regency Centers Corporation worked with the Oracle Partner ICS to upgrade to Oracle's JD Edwards EnterpriseOne Version 9.0, migrating to a more user-friendly, Web-based platform and realizing numerous new efficiencies in property management and tenant payment processing. The company accelerated and improved vendor invoice accuracy with Oracle WebCenter Imaging, which increases process integrity by identifying potential duplicate bills while enabling rapid approval of electronic invoice documents.

    Business Results
    - Enabled faster and more accurate tenant billing for year-end expenses, accelerating collection of millions of dollars in revenue
    - Gained full audit and drill-down capabilities that facilitate understanding of the various aspects of calculations for expense participation generation
    - Increased process integrity by identifying potential duplicate bills while enabling rapid approval of electronic invoice documents
    - Helped to ensure on-time payments to hundreds of vendors, including contractors and utilities

    "We have realized numerous efficiencies with Oracle's JD Edwards EnterpriseOne 9.0, particularly around tenant billings. It accelerates our year-end expense reconciliation process and enables us to create and process billings more quickly." - James Chiang, Vice President of Real Estate Accounting, Regency Centers Corporation

    Additional Information
    - Regency Centers Corporation Customer Snapshot
    - Oracle WebCenter Imaging
    - JD Edwards EnterpriseOne Financials 9.0
    - JD Edwards EnterpriseOne Project Costing
    - JD Edwards EnterpriseOne Real Estate Management
    - Oracle Business Intelligence Publisher
    - Oracle Essbase

    Read the article
