Search Results

Search found 2130 results on 86 pages for 'serve u'.

Page 57/86 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • Commerce, Anyway You Want It

    - by David Dorf
    I believe our industry is finally starting to realize the importance of letting consumers determine how, when, and where to interact with retailers. Over the last few months I've seen several articles discussing the importance of removing the barriers between existing channels. Paula Rosenblum of RSR first brought the term omni-channel to my attention back in September. She stated, "omni-channel retail isn't the merging of channels – rather, it's the use of all possible channels (present and future) to enhance the customer experience in a profitable way." I added to her thoughts in this blog posting, in which I said, "For retailers to provide an omni-channel experience, there needs to be one logical representation of products, prices, promotions, and customers across all channels. The only thing that varies is the presentation of the content based on the delivery mechanism (e.g. shelf labels, mobile phone, web site, print, etc.) and often these mechanisms can be combined in various ways."

    More recently Brian Walker of Gartner suggested we stop using the term multi-channel and begin thinking more about consumer touch-points. "It is time for organizations to leave their channel-oriented ways behind, and enter the era of agile commerce--optimizing their people, processes and technology to serve today's empowered, ever-connected customers across this rapidly evolving set of customer touch points." Now Jason Goldberg, better known as RetailGeek, says we should start breaking down the channel silos by re-casting the VP of E-Commerce as the VP of Digital Marketing, and changing his/her focus to driving sales across all channels using digital media. This logic is based on the fact that consumers switch between channels, or touch-points as Brian prefers, as part of their larger buying process. Today's smart consumer leverages the Web, mobile, and stores to get the best shopping experience, so retailers need to make this easier.

    Regardless of what we call it, the key take-away is that "multi-channel" is not only an antiquated term but also an idea whose time has passed. Today, retailers must look at e-commerce, m-commerce, f-commerce, catalogs, and traditional store sales collectively and through the consumers' eyes. The goal is not to drive sales through each channel but rather to just drive sales -- using whatever method the customer prefers. There really should be just one cart.

    Read the article

  • CMS vs Admin Panel?

    - by Bob
    Okay, so this probably seems like an unusual, more grammar-related question, but I was unsure of what to call it. If you use software such as vBulletin or MyBB, or even Blogger, and you're the administrator (or hold another, lesser position such as moderator) of the forum, or publisher/author of the blog, you generally have access to something of an "admin panel". For example, vBulletin's admin panel looks like this and Blogger's admin panel looks something like this. While they both look different and do different things, the goal is fundamentally the same: to provide the user with a method for adding, modifying, or deleting content... to let them control and administer their forum or blog. Also, they're both made specifically by the company for use in a specific product.

    Now, there are also options like Drupal. It seems to offer quite a bit more and be quite a bit more generalized. How does something like this work? If you were freelancing, would you deploy a website with Drupal, or would it be something the client might already have installed on their own server? I've never really used Drupal, only heard about it, so please let me know. Also, there seem to be other options like cPanel, a sort of global CMS that allows you to administer your entire website. How do those work in comparison to Drupal, or the administrative panels that come with vBulletin? They seem to serve related, but different, purposes.

    Basically, what is the norm? If I'm developing a web application for a group that needs to be able to edit their website without going into the code or the database (or rather, wants to work in a graphical, easy-to-use content-management/admin panel), would it be necessary to write my own miniature admin panel? Or would I be able to send them off knowing that they have cPanel? Or could something like Drupal fill this void?

    Again, I'm a little new to web development, and I'm working on planning out my first real, large website. So I need a little advice on the standards and expectations for web development - security and coding practices aside, what should I be looking for as far as usability and administration for the client (who will be running the site once I'm done creating it)? Any extra tips would also be appreciated! Oh, and one more thing: I'm writing the website in Ruby on the Sinatra framework (both Ruby and Sinatra are things I'm fairly comfortable with), and I'm not being paid to make the website (I will also be a user, and one of the three administrators of the website) - it's being built for a club I'm in.

    Read the article

  • How do I prevent having to log in on 3 separate prompts every time I start my machine?

    - by JC
    Ubuntu 11.04 Natty Narwhal, Ubuntu Classic desktop. Each time I start my machine, I have to log in 3 times. I spent a week in IRC (Freenode #ubuntu) and got nothing but condescension. I've searched the official Ubuntu fora for similar problems, tried every recommendation, and still get 3 login screens. As a workaround, I have reset login such that I get a login screen at startup, which I'd prefer not to get since no one but me has physical access to this machine.

    I have gone into System > Preferences > Passwords and Encryption Keys, set 'Passwords: default' to 'Default' and unlocked it, and unlocked the 'Passwords: login' key, too. Next, since that changed nothing, I set 'Passwords: login' to 'Default', and checked to make sure it was still unlocked. Again, no change; I still get 3 login prompts at startup. I've checked twice to ensure that I am the owner of the files; I am. At the suggestion of several people in #ubuntu, I've deleted first one, then the other password key in 'Passwords and Encryption Keys'. Still 3 login prompts. I changed from the Unity desktop to Ubuntu Classic. While that didn't fix the above problem, it is a much more elegant desktop than Unity, and I'll keep it. From what I've read, this seems to be a Seahorse issue, but beyond that, no one seems to have a solution that works. I'm lost. This shouldn't be this difficult or annoying.

    I'm trying to help our local Old Time music collective get their machines switched over to Ubuntu in order to save them some money, which they can use to promote their DRM-free music. But from what I've seen of Ubuntu so far on my own machine, I can't really recommend that they make this switch. I hope to be proved wrong on that point. But as it stands, if I were out of town or out of country and they ran into a problem, they'd have no way of fixing it, as they're all less experienced than even I am. I'm not trying to cast aspersions on Ubuntu or Linux, but it seems pretty clear that KNOWLEDGEABLE, HELPFUL support for Ubuntu is lacking, unless the user experiencing the problem is willing to put up with condescension. Having worked with, and run, several non-profits over the past 20 years, I know that getting volunteers to act professionally can be like herding cats. But an organization's reputation can be damaged by sarcastic behavior on the part of those who serve, effectively, as its public face.

    Thank you all for your help and support. Now... does anyone have a solution to my problem?

    Read the article

  • Apache - create multiple aliases

    - by mc3mcintyre
    I'm trying to set up two websites on my Apache server. One is www.domain.com and the other is test.domain.com. Currently, my 000-default.conf file reads as follows:

        <VirtualHost www:80>
            # The ServerName directive sets the request scheme, hostname and port that
            # the server uses to identify itself. This is used when creating
            # redirection URLs. In the context of virtual hosts, the ServerName
            # specifies what hostname must appear in the request's Host: header to
            # match this virtual host. For the default virtual host (this file) this
            # value is not decisive as it is used as a last resort host regardless.
            # However, you must set it for any further virtual host explicitly.
            #ServerName www.domain.com
            #ServerAlias www

            ServerAdmin [email protected]
            DocumentRoot /var/www/domain.com/

            # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
            # error, crit, alert, emerg.
            # It is also possible to configure the loglevel for particular
            # modules, e.g.
            #LogLevel info ssl:warn

            ErrorLog ${APACHE_LOG_DIR}/domain.error.log
            CustomLog ${APACHE_LOG_DIR}/domain.access.log combined
            UseCanonicalName on
            allow from all
            Options +Indexes

            # For most configuration files from conf-available/, which are
            # enabled or disabled at a global level, it is possible to
            # include a line for only one particular virtual host. For example the
            # following line enables the CGI configuration for this host only
            # after it has been globally disabled with "a2disconf".
            #Include conf-available/serve-cgi-bin.conf
        </VirtualHost>

        <VirtualHost test:80>
            DocumentRoot "/var/www/domain.com/test/"
            ServerName test.domain.com
            ServerAdmin [email protected]
            ErrorLog ${APACHE_LOG_DIR}/test.domain.error.log
            CustomLog ${APACHE_LOG_DIR}/test.domain.access.log combined
            UseCanonicalName on
            allow from all
            Options +Indexes
        </VirtualHost>

        # vim: syntax=apache ts=4 sw=4 sts=4 sr noet

    As is, when I use a browser to go to the www location, it shows me a directory listing. However, if I remove the www:80 on Line 1 and replace it with *:80, it correctly displays the webpage. I don't understand why. Can anyone help me configure this 000-default.conf file so that www goes to "/var/www/domain.com" and test goes to "/var/www/domain.com/test"? Thank you.
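    The usual fix here is name-based virtual hosting: bind every vhost to *:80 and let ServerName/ServerAlias pick the site from the request's Host header. With <VirtualHost www:80>, Apache resolves "www" to an IP address at startup and matches by address rather than by hostname, which is why only the *:80 form behaved. A minimal sketch, assuming both hostnames resolve to this server (logging and other directives omitted):

        <VirtualHost *:80>
            ServerName www.domain.com
            ServerAlias domain.com
            DocumentRoot /var/www/domain.com
        </VirtualHost>

        <VirtualHost *:80>
            ServerName test.domain.com
            DocumentRoot /var/www/domain.com/test
        </VirtualHost>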

    Read the article

  • Oracle Service Cloud May 2014 Release – Focus on your driving by JP Saunders

    - by Tuula Fai
    The next time you're twiddling dials on your car's dashboard to get the air to blow in the right direction, and the right song to play on the stereo, while pulling on the wires to charge your phone and punching in passwords to re-sync your hands-free headset to take a call, consider this... Does having a better dashboard UI in your car improve your driving performance?

    The Tesla car has one of the most modern and intuitive dashboards in any commercial car today. It is actually based on the design of a smart phone, which can download apps and updates directly from the cloud. The 17" touchscreen, Linux-based dashboard totally integrates all channels and devices, allowing the driver to focus on the smooth driving and power of this luxury (toy) car. What the folks at Tesla didn't do was avoid the complexity of our needs. Instead, they streamlined them. And, while we might not all be able to afford a Tesla, their approach demonstrates that a modern UI can ultimately make a positive difference in our lives and businesses.

    This is why the productivity and effectiveness of a Modern Contact Center is many times greater than that of a traditional contact center. Agents in a Modern Contact Center get to focus on the task at hand, the customer engagement, rather than stumbling their way through Lego blocks of complexity. The Oracle Service Cloud is a modern approach to customer service that empowers your agents to focus on improving your operational and strategic success through streamlined business processes. Here are some of the May 2014 release highlights for the Oracle Service Cloud:

    - Performance Enhanced Desktop UI: A modern agent desktop interface that streamlines clumsy tasks, logins, screens and workflows, and is tuned for agent and system performance. Improvements include faster drag-and-drop configurable views, saved searches, and improved caching for high-speed performance even during disconnected or slow internet access.
    - Customer Experience Routing: A streamlined, automatic way to connect the right customer need to the agent with the best skills, based on multidimensional variables such as product skills, language skills, workload and call volume, to optimize the connection and resolution experience.
    - On-The-Go Mobile: Improvements to the Agent mobile app that extend connectivity to websites, plus customer surveys that are mobile-ready, rendered for any device, and ensure the customer's voice is captured while the insight is still top of mind.
    - Infused Social Engagement: Enhancements to infused social capabilities allow agents to respond in social threads directly from within the agent desktop, with the information becoming part of the incident record so that automatic actions (such as reply or escalate) can be triggered off the response.
    - Front-End Siebel Contact Center: The market-leading online Web Customer Self-Service interface from the Oracle Service Cloud is now out-of-the-box ready for Oracle Siebel customers. Deploy a new online web self-service interface in a matter of weeks to let customers self-serve and self-solve answers, with escalated incidents routed directly into the Oracle Siebel Contact Center.

    For more information on the latest enhancements for the Oracle Service Cloud, please see the Oracle Service Cloud May 2014 Capabilities and Benefits. Related blogs: Oracle Service Cloud Feb 2014

    Read the article

  • Implementing the Reactive Manifesto with Azure and AWS

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/31/implementing-the-reactive-manifesto-with-azure-and-aws.aspx

    My latest Pluralsight course, Implementing the Reactive Manifesto with Azure and AWS, has just been published! I'd planned to do a course on dual-running a messaging-based solution in Azure and AWS for super-high availability and scale, and the Reactive Manifesto encapsulates exactly what I wanted to do. A "reactive" application describes an architecture which is inherently resilient and scalable, being event-driven at the core and using asynchronous communication between components. In the course, I compare that architecture to a classic n-tier approach, and go on to build out an app which exhibits all the reactive traits: responsive, event-driven, scalable and resilient. I use a suite of technologies which are enablers for all those traits:

    - ASP.NET SignalR for presentation, with server push notifications to the user
    - Messaging in the middle layer for asynchronous communication between presentation and compute: Azure Service Bus Queues and Topics, AWS Simple Queue Service, and AWS Simple Notification Service
    - MongoDB at the storage layer for easy HA and scale, with minimal locking under load

    Starting with a couple of console apps to demonstrate message sending, I build the solution up over 7 modules, deploying to Azure and AWS and running the app across both clouds concurrently for the whole stack - web servers, messaging infrastructure, message handlers and database servers. I demonstrate failover by killing off bits of infrastructure, and show how a reactive app deployed across two clouds can survive machine failure, data centre failure and even whole cloud failure. The course finishes by configuring auto-scaling in AWS and Azure for the compute and presentation layers, and running a load test with blitz.io. The test pushes masses of load into the app, which is deployed across four data centres in Azure and AWS, and the infrastructure scales up seamlessly to meet the load. The blitz report is pretty impressive: a 99.9% success rate for hits to the website, with the potential to serve over 36,000,000 hits per day - all from a few hours' build time and a fairly limited set of auto-scale configurations. When the load stops, the infrastructure scales back down again to a minimal set of servers for high availability, so the app doesn't cost much to host unless it's getting a lot of traffic.

    This is my third course for Pluralsight, with Nginx and PHP Fundamentals and Caching in the .NET Stack: Inside-Out released earlier this year. Now that it's out, I'm starting on the fourth one, which is focused on C#, and should be out by the end of the year.
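    The course itself is .NET, but the dual-cloud fan-out at its core is easy to sketch in any stack. Here is a minimal TypeScript version of sending one event to both clouds, assuming the current Azure Service Bus and AWS SQS SDKs and placeholder queue names and credentials:

        import { ServiceBusClient } from "@azure/service-bus";
        import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

        // Publish the same event to a queue in each cloud, so either side
        // can pick up the work if the other is down.
        async function publishToBothClouds(event: object): Promise<void> {
          const body = JSON.stringify(event);

          const sb = new ServiceBusClient(process.env.AZURE_SB_CONN!); // placeholder connection string
          const sender = sb.createSender("events");                    // placeholder queue name
          const sqs = new SQSClient({ region: "us-east-1" });

          await Promise.all([
            sender.sendMessages({ body }),
            sqs.send(new SendMessageCommand({
              QueueUrl: process.env.AWS_SQS_URL!, // placeholder queue URL
              MessageBody: body,
            })),
          ]);

          await sender.close();
          await sb.close();
        }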

    Read the article

  • Updating My Online Boggle Solver Using jQuery Templates and WCF

    With WebForms, each ASP.NET page's rendered output includes a <form> element that performs a postback to the same page whenever a Button control within the form is clicked, or whenever the user modifies a control whose AutoPostBack property is set to True. This model simplifies web page development, but carries with it some costs - namely, the large amount of data exchanged between the client and the server during a postback. On postback the browser sends the values of all of its form fields (including hidden ones, like view state, which may be quite large) to the server; the server then sends back the entire contents of the web page.

    While there are some scenarios where this amount of information needs to be exchanged, in many cases the user has performed some action that requires far less information to be exchanged. With a little bit of forethought and code we can have the browser and server exchange much less data, which leads to more responsive web pages and an improved user experience.

    Over the past several weeks I've been writing an article series on accessing server-side data from client script. Rather than rely solely on forms and postbacks, many websites use JavaScript code to asynchronously communicate with the server in response to the page loading or some other user action. The server, upon receiving the JavaScript-initiated request, returns just the data needed by the browser, which the browser then seamlessly integrates into the web page. There are a variety of technologies and techniques that can be employed to provide both the needed server- and client-side functionality. Last week's article, Using WCF Services with jQuery and the ASP.NET Ajax Library, explored using the Windows Communication Foundation, or WCF, to serve data from the web server and showed how to consume such a service using both the ASP.NET Ajax Library and jQuery.

    In a previous 4Guys article, Creating an Online Boggle Solver, I built an application to find all solutions in a game of Boggle. (Boggle is a word game trademarked by Parker Brothers and Hasbro that involves several players trying to find as many words as they can in a 4x4 grid of letters.) This article takes the lessons learned in Using WCF Services with jQuery and the ASP.NET Ajax Library and uses them to update the user interface for my online Boggle solver, replacing the existing WebForms-based user interface with a more modern and responsive interface. I also used jQuery Templates, a JavaScript-based templating library that is useful for displaying the results from a server-side service. Read on to learn more! Read More >
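    The article's client uses jQuery and a WCF endpoint; as a generic sketch of the underlying pattern (send only the board, get back only the solutions), here is the same exchange against a hypothetical JSON endpoint in TypeScript:

        // Hypothetical endpoint and response shape, for illustration only.
        interface SolveResponse { words: string[]; }

        async function solveBoggle(board: string[]): Promise<string[]> {
          const resp = await fetch("/BoggleService.svc/Solve", {   // hypothetical URL
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ board }),                        // just the 16 letters
          });
          if (!resp.ok) throw new Error(`solver failed: ${resp.status}`);
          const data: SolveResponse = await resp.json();
          return data.words;                                        // just the results
        }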

    Read the article

  • Sweden Azure Group with Michele Leroux Bustamante & Maarten Balliauw Thursday 22nd May

    - by Alan Smith
    Originally posted on: http://geekswithblogs.net/asmith/archive/2014/05/19/156418.aspx

    Sweden Azure Group (SWAG) has the privilege of welcoming Michele Leroux Bustamante and Maarten Balliauw to present sessions at our meeting this Thursday. Michele and Maarten are two of the world's leading experts in Cloud Computing and Azure, and will be taking time out from their busy schedules to share their ideas with us and answer any questions. Knowit Stockholm are kindly hosting the event at their offices, and providing food and refreshments. It should be a great evening. You can register for the event here.

    Azure Q&A - Michele Leroux Bustamante: In this interactive Q&A session Michele Leroux Bustamante will be on hand to share her wealth of experience on Azure-related issues. If you are new to Azure and want some tips to get started, or are an experienced developer needing to negotiate the legal and political protocols related to Cloud Computing, Michele will have been there, done that, and be willing to share her experiences. This session will be entirely driven by the attendees, so please come prepared with questions.

    Reducing latency on the web with the Windows Azure CDN - Maarten Balliauw: Serving up content on the Internet is something our web sites do daily. But are we doing this in the fastest way possible? How are users in faraway countries experiencing our apps? Why do we have three webservers serving the same content over and over again? In this session, we'll explore the Windows Azure Content Delivery Network or CDN, a service which makes it easy to serve up blobs, videos and other content from servers close to our users. We'll explore simple file serving as well as some more advanced, dynamic edge caching scenarios.

    Michele Leroux Bustamante is CIO at Solliance (solliance.net), cofounder of Snapboard (snapboard.com), and is recognized as a Microsoft Regional Director and MVP. Michele is a thought leader with over 20 years specializing in building scalable and secure end-to-end system design, identity and access management, and cloud computing technologies - for companies of all sizes. In recent years Michele has also helped launch several startup business ventures and has been a mentor to startups in several accelerator programs - providing both technical and business guidance. Michele shares her experiences through presentations and keynotes all over the world, and has been publishing regularly in technology journals.

    Maarten Balliauw is a Technical Evangelist at JetBrains. His interests are all web: ASP.NET MVC, PHP and Windows Azure. He's a Microsoft Most Valuable Professional (MVP) for Azure and an ASPInsider. He has published many articles in both PHP and .NET literature such as MSDN magazine and PHP architect. Maarten is a frequent speaker at various national and international events such as MIX (Las Vegas), TechDays, DPC, ...

    Read the article

  • eSTEP Newsletter October 2012 now available

    - by uwes
    Dear Partners,

    We would like to inform you that the October '12 issue of our Newsletter is now available. The issue contains information on the following topics:

    News from Corp: Oracle Announces Oracle Solaris 11.1 at Oracle OpenWorld; Oracle Announces Oracle Exadata X3 Database In-Memory Machine; Oracle Enterprise Manager 12c Introduces New Tools and Programs for Partners; Oracle Unveils First Industry-Specific Engineered System - the Oracle Networks Applications Platform; Oracle Unveils Expanded Oracle Cloud Offerings; Oracle Outlines Plans to Make the Future Java During JavaOne 2012 Strategy Keynote; Some Interesting Java Facts and Figures; Oracle Announces MySQL 5.6 Release Candidate

    Technical Section: What's up with LDoms (4 tech articles); Oracle SPARC T4 Systems Cut Complexity, Cost of Cryptographic Tasks; PeopleSoft Enterprise Financials 9.1; PeopleSoft HCM 9.1 combined online and batch benchmark; Product Update Bulletin Oracle Solaris Cluster Oct 2012; Sun ZFS Storage 7420; SPARC Product Line Update; SPARC M-series - New DAT 160 plus EOL of M3000 series; SPARC SuperCluster and SPARC T4 Servers Included in Enterprise Reference Architecture Sizing Tool; Oracle Magazine

    Learning & Events: Recently delivered Techcasts: An Update after Oracle OpenWorld; An Update on OVM Server for SPARC; Update to Oracle Database Appliance

    References: Bridgestone Aircraft Tire Reduces Required Disk Capacity by 50% with Virtualized Storage Solution; Fiat Group Automobiles Aligns Operational Decisions with Strategy by Using End-to-End Enterprise Performance Management System; Birkbeck, University of London Develops World-Class Computer Science Facilities While Reducing Costs with Ultrareliable and Scalable Data Infrastructure

    How to: Introducing Oracle System Assistant; How to Prepare a ZFS Storage Appliance to Serve as a Storage Device; Migrating Oracle Solaris 8 P2V with Oracle Database 10.2 and ASM; White Paper on Best Practices for Building a Virtualized SPARC Computing Environment; How to Extend the Oracle Solaris Studio IDE with NetBeans Plug-Ins; How I Simplified Oracle Database 11g Installation on Oracle Linux 6

    You will find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. The link to the portal is shown below.

    URL: http://launch.oracle.com/
    PIN: eSTEP_2011

    Previously published Newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore, and any feedback is appreciated to help us improve the service and information we deliver.

    Thanks and best regards,
    Partner HW Enablement EMEA

    Read the article

  • Ubuntu Server and setting up two nic cards

    - by kmalik
    I have Ubuntu Server on a computer with a wireless and a hardwired NIC. The wireless card needs to get the internet and pass it to the Ubuntu server, as well as pass it along through the hardwired NIC to more computers. I am having issues with the basic setup, as I believe the route table is grabbing from the wrong NIC. The router is 192.168.1.0 and the server is set to 192.168.1.11 on the wireless card through DHCP. ETH0 (the wired NIC) is set up to be 10.10.10.0 and the server is 10.10.10.1.

    I am not a Linux or networking guru, but basically I am trying to have internet come from a guest network (192.168.1.0) to give internet to the Ubuntu server, and then the server will also (a) have the wired NIC serve DHCP addresses to other computers via a switch or router (that acts as a switch) using 10.10.10.0 addresses, and (b) I would love if it also passed along internet capabilities, if possible. But really, at this point my hope is to at least get the internet working on the server and DHCP passing correctly. At the moment the specific issue I am having is getting Ubuntu Server to connect to the internet with both NICs up and running correctly. Any help would be appreciated!

    The route table is as follows:

        Destination    Gateway       Genmask          Flags  Metric  Iface
        0.0.0.0        10.10.10.1    0.0.0.0          UG     100     eth0
        10.10.10.0     0.0.0.0       255.255.255.0    U      0       eth0
        169.254.0.0    0.0.0.0       255.255.0.0      U      0       eth0
        192.168.1.0    0.0.0.0       255.255.255.0    U      0       eth1

    My interfaces file is set up as follows:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 10.10.10.1
            netmask 255.255.255.0
            network 10.10.10.0
            broadcast 10.10.10.255
            gateway 10.10.10.1
            domain-name-servers 192.168.1.0

        auto eth1
        iface eth1 inet dhcp
            netmask 255.255.255.0
            gateway 192.168.1.0
            wpa-driver wext
            wpa-ssid "ssid_name"
            wpa-ap-scan 1
            wpa-proto wpa
            wpa-pairwise ccmp
            wpa-group ccmp
            wpa-key-mgmt wpa-psk
            wpa-psk "HASH"

    My dhcpd.conf (as there is a DHCP server on here) is as follows:

        ddns-update-style none
        default-lease-time 600
        max-lease-time 7200
        authoritative
        option domain-name "Kamron's Network"
        option subnet-mask 255.255.255.0
        option broadcast-address 10.10.10.255
        option routers 192.168.1.0
        option domain-name-servers 192.168.1.0 98.223.128.213
        subnet 10.10.10.0 netmask 255.255.255.0 {
            range 10.10.10.10 10.10.10.99;
        }
        log-facility local7
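    For the internet-sharing half of this, the usual approach on Ubuntu is to turn the server into a NAT gateway. A minimal sketch, assuming eth1 is the internet-facing wireless interface and eth0 the 10.10.10.0/24 LAN side:

        # enable packet forwarding (persist with net.ipv4.ip_forward=1 in /etc/sysctl.conf)
        sysctl -w net.ipv4.ip_forward=1

        # masquerade LAN traffic out of the internet-facing interface
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
        iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
        iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

    The LAN clients would then need the server itself, 10.10.10.1 (not 192.168.1.0), as the option routers value in dhcpd.conf.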

    Read the article

  • OData – The easiest service I can create

    - by Jon Dalberg
    I wanted to create an OData service with the least amount of code, so I fired up Visual Studio and got cracking. I decided to serve up a list of naughty words and make them read-only.

    Create a new web project. I created an empty MVC 2 application, but MVC is not required for OData. Add a new WCF Data Service to the project. I named mine NastyWords.svc since I'm serving up a list of nasty words. Then add a class to expose via the service:

        [DataServiceKey("Word")]
        public class NastyWord
        {
            public string Word { get; set; }
        }

    I need to be able to uniquely identify instances of NastyWord for the DataService, so I used the DataServiceKey attribute with the "Word" property as the key. I could have added an "ID" property, which would have uniquely identified them and would then not need the "DataServiceKey" attribute, because the DataService would apply some reflection and heuristics to guess at which property is the unique identifier. However, the words themselves are unique, so adding an "ID" property would be redundantly repetitive.

    Then I created a data source to expose my NastyWord objects to the service. This is just a simple class with IQueryable<T> properties exposing the entities for my service:

        public class NastyWordsDataSource
        {
            private static IList<NastyWord> words = new List<NastyWord>
            {
                new NastyWord { Word = "crap" },
                new NastyWord { Word = "darn" },
                new NastyWord { Word = "hell" },
                new NastyWord { Word = "shucks" }
            };

            public NastyWordsDataSource()
            {
                NastyWords = words.AsQueryable();
            }

            public IQueryable<NastyWord> NastyWords { get; private set; }
        }

    Now I can go to the NastyWords.svc class and tell it which data source to use and which entities to expose:

        public class NastyWords : DataService<NastyWordsDataSource>
        {
            // This method is called only once to initialize service-wide policies.
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

    Compile, browse to my NastyWords.svc, and weep with joy. Now I can query my service just like any other OData service. Next time, I'll modify this service to allow updates to be sent, so I can build up my list of nasty words. Enjoy!
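    Once deployed, the read-only entity set answers the standard OData URI conventions. A few illustrative requests, assuming the service sits at the application root:

        GET /NastyWords.svc/NastyWords                         -- the full entity set
        GET /NastyWords.svc/NastyWords('darn')                 -- one entity, by key
        GET /NastyWords.svc/NastyWords?$filter=Word eq 'hell'  -- server-side filtering
        GET /NastyWords.svc/NastyWords?$orderby=Word&$top=2    -- sorting and paging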

    Read the article

  • The Use-Case Driven Approach to Change Management

    - by Lauren Clark
    In the third entry of the series on OUM and PMI's Pulse of the Profession, we took a look at the continued importance of change management and risk management. The topic of change management and OUM's use-case driven approach has come up in a few recent conversations, so I thought I would jot down a few thoughts on how the use-case driven approach aids a project team in managing the project's scope.

    The use-case model is one of several tools in OUM that is used to establish and manage the project's scope. Because a use-case model can be understood by both business and IT project team members, it can serve as a bridge for ongoing collaboration as well as a visual diagram that encapsulates all agreed-upon functionality. This makes it a vital artifact in identifying changes to the project's scope. Here are some of the primary benefits of using the use-case model as part of the effort for establishing and managing project scope:

    - The use-case model quickly communicates scope in a straightforward manner. All project stakeholders can have a common foundation for the decisions regarding architecture and design and how they relate to the project's objectives.
    - Once agreed upon, the model can be put under change control, and any updates to the model can then be quickly identified as potentially affecting the project's scope. Changes requested or discovered later in the project can be analyzed objectively for their impact on the project's budget, resources and schedule.
    - A modular foundation for the design of the software solution can be established in Elaboration. This permits work to be divided up effectively and executed so that the most important and riskiest use-cases can be tackled early in the project.
    - The use-case model helps the team make informed decisions about implementation priorities, which allows effective allocation of limited project resources. This is very helpful not only in managing scope, but in doing iterative and incremental planning, which relies heavily on the ability to identify project priorities.

    The bottom line is that the use-case model gives the project team a solid understanding of scope early in the project. Combine this understanding with effective project management and communication, and you have an effective tool for reducing the risk of overruns in budget and/or time due to out-of-control scope changes. Now that you've had a chance to read these thoughts on the use-case model and project scope, please let me know your feedback based on your experience.

    Read the article

  • Has test driven development (TDD) actually benefited a real world project?

    - by James
    I am not new to coding. I have been coding (seriously) for over 15 years now, and I have always had some testing for my code. However, over the last few months I have been learning test-driven design/development (TDD) using Ruby on Rails. So far, I'm not seeing the benefit. I see some benefit to writing tests for some things, but very few. And while I like the idea of writing the test first, I find I spend substantially more time trying to debug my tests to get them to say what I really mean than I do debugging actual code. This is probably because the test code is often substantially more complicated than the code it tests. I hope this is just inexperience with the available tools (RSpec in this case). I must say, though, that at this point the level of frustration mixed with the disappointing lack of payoff is beyond unacceptable. So far, the only value I'm seeing from TDD is a growing library of RSpec files that serve as templates for other projects/files. Which is not much more useful, and maybe less useful, than the actual project code files.

    In reading the available literature, I notice that TDD seems to be a massive time sink up front, but pays off in the end. I'm just wondering: are there any real-world examples? Does this massive frustration ever pay off in the real world? I really hope I did not miss this question somewhere else on here. I searched, but all the questions/answers are several years old at this point. It was a rare occasion when I found a developer who would say anything bad about TDD, which is why I have spent as much time on this as I have. However, I noticed that nobody seems to point to specific real-world examples. I did read one answer that said the guy debugging the code in 2011 would thank you for having a complete unit testing suite (I think that comment was made in 2008).

    So, I'm just wondering, after all these years, do we finally have any examples showing the payoff is real? Has anybody actually inherited or gone back to code that was designed/developed with TDD and has a complete set of unit tests, and actually felt a payoff? Or did you find that you were spending so much time trying to figure out what the test was testing (and why it was important) that you just tossed out the whole mess and dug into the code?

    Read the article

  • BI&EPM in Focus November 2013

    - by Mike.Hallett(at)Oracle-BI&EPM
    IBM is Embracing Oracle Exalytics: The Velocity of Thought and Action (link)

    Customers: Ambulance Victoria, Australia, uses analytics and modelling to serve the expanding needs of a growing population (link); Cablemás Selects Oracle to Speed Customer Data Insights (link); National Instruments Introduces New Business Intelligence Solutions - Runs Reports up to 30x Faster, and Expands Customer Insight (link); FLSmidth Ensures Precise, Transparent Financial Reporting at All Business Levels, Reduces Financial Consolidation Time by up to 40% (link)

    Enterprise Performance Management: Partner Edgewater Ranzal Webinar Series - Mitigate Your Biggest EPM Project Risk, Thursday 21st November (register here: 4.00 GMT); Capital Planning in the Energy Industry, Tuesday 26th November (register here: 4.00 GMT); Driving Value in the Retail Industry Using Hyperion Strategic Finance (HSF), Tuesday 10th December (register here: 7.00 GMT); Dec 11, Look Smarter Selling Hyperion Profitability & Cost Management (HPCM) Webcast (link); EPM System Infrastructure Tips & Tricks; Support: November EPM Patch Set Updates released; Business Analytics Monthly Index - October 2013; Hyperion Smart View Assistance with OBIEE 11.1.1.7; Hyperion Disclosure Management 11.1.2.3.330 PSU 17444967 [Doc ID 1592645.1]; Hyperion Financial Close Management (FCM) 11.1.2.3.100 PSU 16989110 [Doc ID 1592644.1]

    Business Intelligence: BI-Apps Whitepaper: Packaged Analytic Applications: Accelerating Time and Value by Wayne Eckerson (link); BI Apps Blog: A Closer Look at Oracle Price Analytics (link); Blog: Taking Your Business Scorecard Golfing (link); Blog: Practical Uses of Business Scorecards, from Company-Wide to Process Specific (link); Nov 19, Big Data at Work Series: How Delphi Harnesses Big Data to Improve Warranty Response & Customer Satisfaction (link); Rittman Mead Blog: Oracle BI Apps 11.1.1.7.1 - GoldenGate Integration; Support: OBIEE Suite Bundle Patches (understand OBIEE naming convention) [Doc ID 1591422.1]; Support Blog: Java update alert: Essbase Administration Services (EAS) 11.1.2.3 (link); Support Blog: OBIEE 11.1.1.7.131017 now available (link)

    Read the article

  • Architecture for a template-building, WYSIWIG application

    - by Sam Selikoff
    I'm building a WYSIWYG designer in Ember.js. The designer will allow users to create campaigns - think MailChimp. To build a campaign, users will choose an existing template. The template will have a defined layout. The user will then be taken to the designer, where he will be able to edit the text and style, and additionally change some layout options.

    I've been thinking about how best to go about structuring this app, and there are a few hurdles. Specifically, the output of the campaign will be dynamic: eventually, it will be published somewhere, and when the consumers (not my users, but the people clicking on the campaign that my user created) visit the campaign, certain pieces of data will change, depending on the type of consumer viewing the campaign. That means the ultimate output of the designer will be a dynamic site. The data that is dynamic for this site - the end product - will not be manipulated by the user in the designer. However, the data that will be manipulated by the user in the designer are things like copy, styles, layout options, etc. I'll call the first set of variables server-side data, and the second client-side data.

    It seems, then, that the process will go something like this:

    1. I'll need to create templates for this designer that have two dynamic segments. For instance, the server-side data could be Liquid expressions, and the client-side data Handlebars expressions.
    2. When the user creates a campaign, I would compile the template on the back end using some dummy data for the server-side variables, and serve up a Handlebars template to the Ember app.
    3. The user would then edit the template, and the Ember app would save all his edits to the JS variables that were powering the template. This way he'd be able to preview the template. When he saves, he'll send back the selected template, along with all the data and options he's made.
    4. When it comes time to publish, the back-end system will have to do two things: compile the template with Handlebars using the campaign data, and then compile the template with Liquid using the server-side data.

    Is my thinking roughly accurate about this, or is there a simpler way?
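    For what it's worth, the two-pass idea in step 2 is easy to prototype in Node. A minimal sketch of that flow (Liquid pass first, Handlebars pass second) using the liquidjs and handlebars packages - library choices of this sketch, not something the question prescribes; Liquid's {% raw %} tag is what lets the client-side Handlebars expressions survive the server-side pass:

        import { Liquid } from "liquidjs";
        import Handlebars from "handlebars";

        const template =
          "{% if premium %}<p>VIP offer</p>{% endif %}" +   // server-side segment (Liquid)
          "{% raw %}<h1>{{ headline }}</h1>{% endraw %}";   // client-side segment (Handlebars)

        async function render(serverData: object, campaignData: object): Promise<string> {
          // Pass 1: resolve server-side data; {% raw %} emits the
          // Handlebars expressions untouched.
          const liquid = new Liquid();
          const hbsTemplate = await liquid.parseAndRender(template, serverData);

          // Pass 2: resolve the designer's campaign data.
          return Handlebars.compile(hbsTemplate)(campaignData);
        }

        // render({ premium: true }, { headline: "Spring Sale" })
        //   -> "<p>VIP offer</p><h1>Spring Sale</h1>"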

    Read the article

  • Goodbye, Spreadsheets and Hello Modern ERP

    - by Christine Randle
    By: Steve Cox, Vice President, Oracle Accelerate for Midsize Companies

    Signs of the resurging economy continue to sprout, with green shoots rising across different sectors and industries. With the economy on the rebound, businesses are increasing their investment in technology to keep up with growth and evolving demands; as proof, Gartner recently increased its worldwide IT spending forecast for 2012 to $3.6 trillion, anticipating a 3 percent increase from 2011 spending.

    One of the segments most reliant on technology to catapult growth is midsize companies - established businesses leveraging every competitive efficiency and advantage to compete with much larger enterprises. We find that to compete against the big guys, they need to create an internal technology infrastructure to fuel that growth. Goodbye, spreadsheets and hello modern ERP.

    While many businesses postponed upgrading or replacing financial and HR management systems during the recession, some have now started dusting off RFPs and revisiting technology options. Years ago, midsize organizations used spreadsheet-based systems and processes to manage employees, customers, partners, products and revenue. We've found that as companies scale up, they are apt to avoid heavily customizing their existing systems, and instead are more prone to standardize on a modern, enterprise-class ERP system.

    Modern ERP platforms enable growing companies to immediately address the most pressing challenges - accounting, talent management, customer retention, et al. Midsize companies implement these systems and processes to help them earn more, go public or expand globally. And today, choice is a primary factor when selecting an ERP solution. Businesses have more deployment options now than ever before, depending on their unique structures and needs. Whether the preference is on demand, cloud, hosted or on premise, a modular, scalable deployment is available to meet the need.

    With modern ERP systems, businesses that once struggled to do more with fewer resources have access to the same quality tools as larger competitors. By adopting top-tier ERP systems tailored to individual business needs, midsize companies can support business operations while creating an enterprise system that seamlessly scales up to fuel future growth. Meaning that the ERP decision your company makes today will have legs to serve your business for years to come.

    Read the article

  • Software engineering project idea feedback [on hold]

    - by Chris Sewell
    I'm a third-year student currently in the project/dissertation section of my degree. I have drafted a proposal for my final year project and would appreciate any feedback. The feedback can be anything constructive, either specific to this proposal, to the area I will be working and researching in, or to my ideas. I will accept all input.

    Aims: My aim is to attempt a proof of concept and prototype a runtime-as-a-service (RaaS). This cloud-based runtime will allow clients to dynamically offload tasks or create cloud applications. Currently, software-as-a-service (SaaS) cloud applications are purpose-built and have a predefined scope in which they can assist or serve the client; this scope cannot be changed without physical alteration to the client and server software. With RaaS the client could potentially define any task it wanted at any time, depending on its environment variables, and the client and server would then communicate parameters and returns for that task. For the client to utilize a RaaS it must be able to conceive and then define a task using an appropriate XML vocabulary. As the scope of the cloud solution is defined by the client at its runtime, the cloud solution only has to exist for as long as the client requires it, as opposed to a client using a dedicated service.

    Deliverables: The crux of the project will be an XML vocabulary in which the client and server will communicate. I'll prototype the server application that will dynamically create and manage cloud solutions. The solution will be coded using an interpreted language, such as Python or JavaScript, which can evaluate expressions at runtime, or a language that can dynamically compile, such as C# or Java. As a further proof of concept I will also produce a mock client that offloads tasks to the server. The report will attempt to explain the different flavours of cloud computing solutions, including infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS) and SaaS, with real-world examples and where the use of a RaaS could have improved the overall example solution.

    Disclaimer: I'm not requesting stakeholders in my project, nor am I delegating work. Any materials other than feedback, advice or directions will not be utilized.
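    As a purely hypothetical illustration of what such a vocabulary might look like (the proposal deliberately leaves it undefined), a task envelope could carry a name, typed parameters and a declared return type:

        <!-- hypothetical RaaS task definition; every element and attribute name here is invented -->
        <task name="thumbnail" timeout="30">
          <param name="image" type="base64Binary">...</param>
          <param name="width" type="int">160</param>
          <returns type="base64Binary"/>
        </task>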

    Read the article

  • Segmentation and Targeting: Your Tools for Personalizing the Online Customer Experience

    - by Christie Flanagan
    In order to deliver the kind of personalized and engaging online experiences that customers expect today, look to segmentation and targeting. Segmentation is the practice of dividing your site visitors into distinct groups based on shared characteristics or behavior - for example, a segment may consist of site visitors who have visited pages related to a certain product type, or of visitors within the same age group or geographic area. The idea is that those within a segment are more likely to have common needs, problems or interests that can be served by your business. Targeting is the process by which the most relevant content, whether an article, promotion or other piece of content, is delivered to your visitors based on their segment membership.

    Segmentation and targeting are used to drive greater engagement on your web presence by delivering content to your site visitors that is tailored to their interests, behavior or other attributes. You may have a number of different goals for your segmentation and targeting efforts:

    - Up-sell or cross-sell to your customers
    - Conduct A/B testing on your offers and creative
    - Offer discounts, promotions or other incentives for the time and duration that you specify
    - Make it easier to find relevant information about products and services
    - Create a premium content model

    There are two different approaches you can take toward segmentation and targeting for your online customer experience initiatives. The first is more of a manual process, in which marketers manage the process of determining which segments to create and which content to target to those segments. The benefit of this approach is that it gives marketers a high level of control over the whole process, which works well when you have a thorough understanding of your segments and which content is most likely to serve their needs. Tools for marketer-managed segmentation and targeting are often built right into your WEM platform, as they are with Oracle WebCenter Sites. The downside is that the more segments and content you have, the more time-consuming and complicated it can be to manage manually.

    The second approach relies on predictive intelligence to automate the segmentation and targeting process. This allows optimization of the process to occur in real time, reduces the burden of manual segmentation and targeting, and can result in new insights into segments that you may never have thought of on your own. It also provides you with the capability to quickly test new offers and promotions on your site. Predictive segmentation and targeting can be achieved by using Oracle WebCenter Sites and Oracle Real-Time Decisions together.

    Get a taste for how Oracle WebCenter Sites and Oracle Real-Time Decisions combine to deliver powerful capabilities for predictive segmentation and targeting by watching this on-demand webcast introducing Oracle WebCenter Sites 11g or by reading IDC's take on the latest release of Oracle's web experience management solution. Be sure to return to the Oracle WebCenter blog on Thursday for a closer look at how to optimize the online customer experience using these two products together.

    Read the article

  • Maximizing the Value of Software

    - by David Dorf
    A few years ago we decided to increase our investments in documenting retail processes and architectures. There were several goals, but the main two were to help retailers maximize the value they derive from our software and to help system integrators implement our software faster. The sale is only part of our success metric -- it's actually more important that the customer realize the benefits of the software. That's when we actually celebrate.

    This week many of our customers are gathered in Chicago to discuss their successes during our annual Crosstalk conference. That provides the perfect forum to announce the release of the Oracle Retail Reference Library. The RRL is available for free to Oracle Retail customers and partners. It contains thousands of hours of work and represents years of experience in the retail industry. The Retail Reference Library is composed of three offerings:

    Retail Reference Model: We've been sharing the RRM for several years now, with lots of accolades. The RRM is a set of business process diagrams at varying levels of granularity. This release marks the debut of Visio documents, which should make it easier for retailers to adopt and edit the diagrams. The processes represent an approximation of the Oracle Retail software, but at higher levels they are pretty generic and therefore usable with other software as well. Using these processes, the business and IT are better able to communicate the expectations of the software. They can be used to guide customization when necessary, and help identify areas for optimization in the organization.

    Retail Reference Architecture: When embarking on a software implementation project, it can be daunting to start from a blank sheet of paper. So we offer the RRA, a comprehensive set of documents that describe the retail enterprise in terms of logical architecture, physical deployments, and systems integration. These documents and diagrams describe how all the systems typically found in a retailer's enterprise work together. They serve as a way to jump-start implementations using best practices we've captured over the years.

    Retail Semantic Glossary: Have you ever seen two people argue over something because they're using misaligned terminology? It's a huge waste, and it happens all the time. The Retail Semantic Glossary is a simple application that allows retailers to define terms and metrics in a centralized database. This initial version comes with limited content, with the goal of adding more over subsequent releases. This is the single source for defining key performance indicators, metrics, algorithms, and terms so that the retail organization speaks a consistent language.

    These three offerings are downloaded from MyOracleSupport separately and linked together using the start page above. Everything is navigated using a Web browser. See the Oracle Retail Documentation blog for more details.

    Read the article

  • PASS Summit 2011: Save Money Now

    - by Bill Graziano
    Register by March 31st and save $200. On April 1st we increase the price. On July 1st we increase it again. We have regular price bumps all the way through to the Summit. You can save yourself $200 if you register by Thursday.

    In two years of marketing for PASS and a year of finance I've learned a fair bit about our pricing, why we do this and how you react to it. Let me help you save some money! Price bumps drive registrations. We see big spikes in the two weeks prior to a price increase. Having a deadline with a cost attached is a great motivator to get people to take action.

    Registering early helps you and it helps PASS. You get the exact same Summit at a cheaper rate. PASS gets smoother cash flow and a better idea of how many people to expect. We also get people that are already registered that will tell their friends about the conference. This tiered pricing lets us serve those that are very price conscious. They can register early and take advantage of these discounts. I know there are people that pay for this conference out of their own pockets. This is a great way for those people to reduce the cost of the conference. (And remember for next year that our cheapest pricing starts right after the Summit and usually goes up around the first of the year.)

    We also get big price bumps after we announce the program and the pre-conference sessions. If you wrote down the 50 or so best known speakers in the SQL Server community, I'm guessing we'll have nearly all of them at the conference. We did last year. I expect we will this year too. We're going to have good sessions. Why wait? Register today.

    If you want to attend a pre-conference session you can always add it to your registration later. Pre-con prices don't change. It's very easy to update your registration and add a pre-conference session later.

    I want as many people as possible to attend the Summit. It's been a great experience for me and I hope it will be for you. And if you are going to go, do yourself a favor and save some money. Register today!

    Read the article

  • The Problem Should Define the Process, Not the Tool

    - by thatjeffsmith
    All-around awesome tool, but not the only gadget in your toolbox.

    I'm stepping down from my SQL Developer pulpit today and standing up on my philosophical soap box. I'm frequently asked to help folks transition from one set of database tools over to Oracle SQL Developer, which I'm MORE than happy to do. But I'm not looking to simply change the way people interact with Oracle Database. What I care about is your productivity. Is there a faster, more efficient way for you to connect the dots, get from A to B, or just get home to your kids or to the pub for happy hour?

    If you have defined a business process around a specific tool, what happens when that tool 'goes away?' Does the business stop? No, you feel immediate pain until you are able to re-implement the process using another mechanism. Where I get confused, or even frustrated, is when someone asks me to redesign our tool to match their problem. Tools are just tools. Saying you 'can't load your data anymore because XYZ' isn't valid when you could easily do that same task via SQL*Loader, Create Table As Selects, or 9 other different mechanisms. Sometimes change brings opportunity for improvement in the process. Don't be afraid to step back and re-evaluate a problem with a fresh set of eyes. Just trying to replicate your process in another tool exactly as it was done in the 'old tool' doesn't always make sense.

    Quick sidebar: scheduling a Windows program to kick off thousands if not millions of table inserts from Excel, versus using a 'proper' server process with SQL*Loader and/or external tables, means sacrificing scalability and reliability for convenience. Don't let old habits blind you to new solutions and possibilities.

    Of course I'm not going to sit here and say that our tools aren't deficient in some areas or can't be improved upon. But I bet if we work together we can find something that's not only better for the business, but is also better for you. What do you 'miss' since you've started using SQL Developer as your primary Oracle database tool? I'd love to start a thread here and share ideas on how we can better serve you and your organization's needs. The end solution might not look exactly like what you have in mind starting out, but I had no idea I'd be a Product Manager when I started college either.

    What can you no longer 'do' since you picked up SQL Developer? What hurts more than it should? What keeps you from being great versus just good?
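    For the sidebar's point of reference, the server-side load alluded to is a one-file affair. A minimal SQL*Loader sketch, with made-up table, column and file names:

        -- load_items.ctl (hypothetical names throughout)
        LOAD DATA
        INFILE 'new_items.csv'
        APPEND INTO TABLE items
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        (item_id, item_name, unit_price)

    Run it from the command line with something like: sqlldr scott/tiger control=load_items.ctl log=load_items.log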

    Read the article

  • what would be a good way to implement/render a 2d tiled map for a browser game?

    - by jj_
    I've made this little RPG Ruby game while learning, and now I'd like to make it into a browser game. I've already set up the Sinatra framework to serve it, so what I am looking for, before everything else, is a way to represent the game map in the browser (location attributes are stored in the db). A new map is randomly generated by code for each new game at game start.

    For now, forget the db, and let's say a map (say 100x100 "squares") is stored as a tridimensional array (x, y, ...). The last "dimension" of the array stores who and what is at that map cell: a player, a building, whatever. So all I have to do is render those "squares", or array cells, to a 2D tiled map in the browser. The map does not need to refresh or be dynamically fetched as you scroll it (at least at this stage of development), but a technology which would allow me to do so in the future would be a good reason for choosing it.

    Things that I thought of: HTML tables, HTML5 canvas, some JS framework which is designed exactly for this purpose (which I do not know of - please advise). Yes, I know about the gamequery-js framework, but I've never used it, and I don't know if it's going to slow everything down to unusability as I'm adding new features (scrolling, Ajax). I really don't know of any other alternatives. Maybe there are lighter approaches? Easier or more minimalistic ways? A more targeted JS framework which is the right tool for the job? Maybe just some HTML canvas code, or even simple image maps, or images with absolute positioning will be enough?

    The thing is, I'd like to start simple and then gradually make it better, so, as I said before, I'd prefer something that will give me room for improvement, or is headed toward new web tendencies, but which will also give me a bit of gratification in the beginning :) So... advice is needed! And appreciated! :) Thanks

    p.s. Flash is excluded because I don't like it.
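    The canvas option is probably the lightest starting point. A minimal TypeScript sketch - the cell types and colors are invented for illustration; swap fillRect for drawImage once there is tile art:

        type Cell = "empty" | "player" | "building";   // invented cell types

        const TILE = 16; // tile edge in pixels

        function drawMap(map: Cell[][], canvas: HTMLCanvasElement): void {
          const ctx = canvas.getContext("2d");
          if (!ctx) return;
          const colors: Record<Cell, string> = {
            empty: "#8a6",
            player: "#36c",
            building: "#975",
          };
          // one filled square per map cell
          for (let x = 0; x < map.length; x++) {
            for (let y = 0; y < map[x].length; y++) {
              ctx.fillStyle = colors[map[x][y]];
              ctx.fillRect(x * TILE, y * TILE, TILE, TILE);
            }
          }
        }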

    Read the article

  • Spotlight on an office - Moscow

    - by Maria Sandu
    Probably the most famous place in Moscow, after Red Square, is the centre of the city. Here you can find beautiful buildings that seem to touch the sky, located on the banks of the river. In one of these high towers you can find the Oracle offices, friendly and modern. The stunning view will capture your attention for a couple of minutes, and then you can enjoy a delicious coffee and take a seat at your desk, starting a new day.

    My name is Dmitry, and I can tell you that we enjoy every minute spent in the office, and that's because of the pleasant atmosphere. As soon as you enter the offices, the friendly environment will make you feel more relaxed. Even though the space is split between the different departments, we interact and communicate a lot. We take our cup of coffee or tea together and discuss our achievements and all sorts of subjects in the kitchen or in the open space. One of my favorite parts is the festive events, when we celebrate with cakes and goodies. Any birthday or new arrival is a good reason for a tea party!

    We have some work-related traditions that help us as employees. One of them is the monthly Tech Hour, when experts from the Pre-sales team discuss technical topics and the most recent innovations within the company. Lunch is another good opportunity to interact and chat. We have a variety of options, such as the two kitchens or the vast number of restaurants where you can order anything you want. As we are right in the centre of Moscow, you can choose between sushi, Italian pasta and all sorts of food. We usually go with our colleagues to have lunch. If you care about your health, I have very good news for you, as nearby there are two first-class fitness centres with swimming pools, yoga and various sport classes that you can attend. My suggestion would be to either start or end your day with a visit to the swimming pool for a well-deserved hour of relaxation.

    As I mentioned before, we're right in the heart of Moscow, so after work you can spend some time in the large shopping centers, where you can choose between many different entertainment options. We often go bowling or to the cinema. I hope I have given you a glimpse into working life at the Oracle offices in Moscow, a really great and pleasant place to work, so follow us on http://campus.oracle.com for our latest vacancies and internships.

    Read the article

  • Supermicro BIOS recovery - SUPER.ROM

    - by Goyuix
    I have a Supermicro X9SCL+-F motherboard. I flashed a beta BIOS to it, and then the flash went bad when I tried to flash back to the latest stable version. I am attempting to recover using their SUPER.ROM recovery from a flash drive, without success.

    I read in the manual that if I hold down Ctrl+Home while powering on the server, I can do a BIOS recovery from a flash drive. I hold down those keys, hear the desired two beeps, and can see the activity LED on the flash drive light up. Unfortunately, instead of the monitor turning on and allowing a BIOS recovery as the manual indicates, I hear five beeps, followed shortly after by three beeps.

    I grabbed the latest BIOS from their site (x9scm2.508.zip), extracted it to my flash drive and renamed it to SUPER.ROM. Their instructions are not clear on whether any ROM can serve as the SUPER.ROM file, or whether I need a special SUPER.ROM file to initiate the recovery, at which time I can supply a known-good ROM.

    Does anyone have any expertise in ROM recovery for Supermicro boards? Am I missing some key step? Can any known-good ROM file function as the SUPER.ROM file for recovery?

    Read the article

  • Windows 2008, IIS7 and virtual directories

    - by Thomas
    I created a virtual directory called test (C:\test) under the Default Web Site and added two simple test files (one HTML and one ASPX). I thought I had to add the IUSR and NetworkService (for application pools) accounts to C:\test and grant those users appropriate rights in order for IIS7 to serve the content. It appears that is not the case at all, as I can view any files in the virtual directory (even if I convert it to an application) without changing or adding any security settings on the C:\test folder. I just installed IIS7 with ASP.NET on Windows 2008 without changing any settings besides adding the virtual directory. Am I missing something? Even my book on IIS7 states that the user accounts should be added and the appropriate rights granted.

    I added the following to answer the comments: I am referencing the file using a public IP, http://xxx.xxx.xxx.xxx/test/one.html, and neither the IP nor localhost is in my trusted sites. I am not signed in on the server at all, as I am accessing the content from my home machine and the content is on my production server. The following users/groups have access to C:\test on the server (Creator Owner, System, Administrators, Users), and the app pool is running under the default NetworkService account.

    I basically installed Windows 2008, added the IIS role with ASP.NET, then opened IIS7, added a virtual directory and copied two files to the directory to test. It works, which is great, but I want to understand why it works. How is it that IIS7 can access files in the C:\test folder without any permissions set?

    Read the article
