Search Results

Search found 14379 results on 576 pages for 'threat management gateway'.


  • Oracle Enterprise Manager 12c R3 introduces advancements in cloud lifecycle and operations management

    - by Anand Akela
    Oracle Enterprise Manager 12c Release 3 (R3) was announced (Press Release) earlier today. It is now available for download at OTN. This latest release features improvements in several areas, including:

    - Improvements to Private Cloud and Engineered Systems Management
    - Expanded Middleware and Application Management Capabilities
    - Efficiency Gains for Enterprise Manager Users in EM's Enterprise-Ready Framework

    You can learn more about what's new in Oracle Enterprise Manager 12c R3 in the Enterprise Manager 12c documentation. You will see more blogs and details about the new features during the next few weeks. On July 18th, you can join us for a webcast to hear Thomas Kurian, EVP of Product Development, on what Oracle Engineering has achieved with Oracle Enterprise Manager 12c Release 3. Later during this webcast, Oracle experts will discuss the latest capabilities in Oracle Enterprise Manager 12c Release 3 for cloud lifecycle and operations management. The presentation will be followed by a live Q&A session with Oracle experts. You can also join us online on Twitter to get your specific questions answered; please use hashtag #em12c to join the conversation. Register Now for the Webcast! Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • .NET Dependency Management Systems

    - by StriplingWarrior
    I have some .NET projects that are starting to get large enough to merit looking into dependency management solutions, so we don't have to copy binaries from one project to another. Here's what I've found so far: NPanday is based on a port of Maven. I can't tell how recently it was worked on, but the last release was in May 2011. NuGet seems to be under active development, and it appears to have support directly from Microsoft. Some people complained that it "only addresses dependency resolution," but I don't know what else it should address, or whether it has added more features since that point. It does appear to have recently added the ability to import binaries as part of the build process so we don't have to commit them to our repositories. Refix appears to still be in beta, having received no attention since Sept 2011. Would somebody with recent experience using any of these dependency management tools (or any others that work well) share their experience? Is NuGet mature enough to use for dependency management? If not, what does it lack?
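
    For anyone evaluating the NuGet route, a minimal sketch of the no-binaries-in-source-control workflow with the nuget.exe command line (the package name is just an example):

        # fetch one package into the local packages folder (example package)
        nuget install Newtonsoft.Json -OutputDirectory packages

        # or restore everything a project's packages.config lists, so the
        # binaries never have to be committed to the repository
        nuget install packages.config -OutputDirectory packages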

    Read the article

  • What payment gateways do real customers really use when given the choice?

    - by ??????
    I would like to give customers the option of paying however they can, whether that be through a proper gateway (e.g. SagePay) or through something else such as PayPal, Amazon Checkout or Google Checkout. Personally I have not bought anything through Amazon Checkout except on Amazon.co.uk, and my PayPal buys have been limited. As for Google Checkout, I have no idea what it is or how it works from a consumer perspective. I understand that people buying from smaller sites are happier to pay by PayPal as they already have an account and trust PayPal. As for Amazon Payments and Google Checkout, do people actually use them if given the choice? There are a lot of people on Kindles these days, happy to buy stuff via Amazon on their Kindle. Would Amazon Payments make sense to this growing crowd? With too many payment gateways on offer, it might be confusing at the checkout. Does anyone know if this is a problem for genuine customers? I also have not seen many 'pay by Amazon Payments' icons on websites (you see PayPal all the time). Does advertising the fact that you can pay by Amazon Payments increase sales, e.g. to Kindle owners that have a nebulous book-buying account that 'their other half doesn't know about'?

    Read the article

  • Usual Suspects: Typical 3rd Party Entities in E-Commerce [closed]

    - by zharvey
    I am doing some requirements/analysis for a web app that I'd like to build (Ruby/Java developer here). This web app would have a store front, shopping cart and would need to be totally compliant with all e-com best practices. It's amazing how much non-technical info comes up when you search for phrases like "how does e-commerce work", but very little comes up in the way of technical details. As such, I'm having extreme frustration finding answers to what I consider pretty straightforward questions. I came here because I believe this question is not off-topic; if it is, please leave a comment as to why this question does not belong here and I will happily remove it myself (upvotes if your comment can point me to the correct place for this question!). So then: What 3rd parties will I need to work with to have a modern, web-compliant e-com site? So far I can account for a payment gateway provider like Authorize.net and an SSL certificate provider like Trustwave. Any others? What other standards besides PCI compliance will I be held to (besides governing laws, of course!)? Vulnerability scans: PCI compliance requires quarterly scans. If I'm a "Level 4" (low-volume) merchant, does that still apply to me? Regardless, my backend architecture is quite large, with web servers, app servers, a database, message brokers and more. Does each of these servers need to be scanned? If not, which servers do need these quarterly scans? I usually hate to ask micro-questions inside of one large one, but these are so closely related I just felt like asking them all separately would be spamming the site with too many petty questions. Thanks in advance!

    Read the article

  • Oracle Transportation Management (Lead) Functional Consultant in Germany

    - by user769227
    My name is Giovanni and I lead the practice of OTM (Oracle Transportation Management) consultants in Western Europe. I currently have a role open for an OTM Lead Consultant to join my international team in Germany. Oracle Transportation Management is the leading TMS application software in the market, as confirmed by Gartner's classification as a LEADER in its TMS Magic Quadrant with the highest rating among vendors. The OTM Consulting practice is a team of OTM functional and technical specialists located across Europe whose broad objective is to assist companies in the implementation of their TMS solution based on OTM. These companies are leading shippers across various industries and logistics service providers. Key requirements for this role are: relevant experience with Supply Chain or Transportation Management in other consulting organizations or large enterprises, the drive to learn the leading TMS application software in today's market, and the interest to join a truly international team. We offer the opportunity to work for a leader of the IT industry and assist international clients to realize their business transformation initiatives through innovation. If you have an entrepreneurial spirit and are looking for a work culture where innovation is the goal, hard work is expected, and creativity is rewarded, then please visit this link for more information.

    Read the article

  • Internal but no external Citrix Access?

    - by leeand00
    We recently had to reload our configuration of Citrix on our server Server1, and since then we can access Citrix internally, but not externally. Normally we access Citrix from http://remote.xyz.org/Citrix/XenApp, but since the configuration was reloaded we are met with a Service Unavailable message. Internally, accessing the Citrix web application from http://localhost/Citrix/XenApp/ on Server1 works, and it also works from machines on our local network using http://Server1/Citrix/XenApp/. I have gone into the Citrix Access Management Console and, from the tree pane on the left, clicked on Citrix Access Management Console->Citrix Resources->Configuration Tools->Web Interface->http://remote.xyz.org/Citrix/PNAgent and Citrix Access Management Console->Citrix Resources->Configuration Tools->Web Interface->http://remote.xyz.org/Citrix/XenApp, which in both cases displays a screen that reads "Secure client access". Here it offers me several options: Direct, Alternate, Translated, Gateway Direct, Gateway Alternate, Gateway Translated. I know that I can change the method of use by clicking Manage secure client access->Edit secure client access settings, which opens a window that reads "Specify Access Methods" and, below that, "Specify details of the DMZ settings, including IP address, mask, and associated access method". I don't know what the original settings were, and I also don't know how our DMZ is configured, so I can't specify the correct settings to give access to our external users on the http://remote.xyz.org/Citrix/XenApp site. We have a vendor who set up our DMZ and does not allow us access to the gateway to see these settings. What sorts of questions should I ask them to restore remote access?

    Read the article

  • 2 NICs, 2 Default Gateways

    - by andre.dias
    Here is my scenario: I have a server with two NICs, each with a different IP, connected to different routers. Almost everything is configured the way I need: traffic coming in on eth0 exits through eth0, and traffic coming in on eth1 exits through eth1. And there is a default gateway configured:

        $ route
        default IP 0.0.0.0 UG 0 0 0 eth0

    With this configuration, traffic generated on the server goes out through eth0 (lynx www.google.com, for example). The problem: the Internet link on eth0 went down today. Traffic coming in on eth1 was OK... no problem. But traffic generated on the server was a problem... the default gateway was out... no access to the Internet anymore (no more lynx www.google.com). So I added a new default gateway entry pointing to eth1. For 30 minutes I kept it that way... two default gateways, but just one "working"... and everything worked just fine. But then I removed the eth0 gateway entry because, well, two default gateways is kind of weird. My question: is there any problem with keeping these two default gateways, one for each NIC, so I don't need to do anything when one link goes down again?

        $ route
        default IP1 0.0.0.0 UG 0 0 0 eth0
        default IP2 0.0.0.0 UG 0 0 0 eth1
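
    A minimal sketch of one way to keep both entries without ambiguity: give them different metrics so eth0 stays preferred (IP1/IP2 stand in for the two router addresses). Note this only helps when the kernel actually sees eth0's link go down and removes its routes; if the upstream fails while the link stays up, the preferred route remains in place.

        # eth0 preferred; eth1 used only if eth0's routes disappear
        route add default gw IP1 metric 0 dev eth0
        route add default gw IP2 metric 10 dev eth1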

    Read the article

  • How to set up a Mac server to use two gateways

    - by Brady
    I recently asked this question: How to set Mac server to use different Gateway for internet bound traffic. The answer given works but has presented me with another issue that I didn't make clear in that question. Here is my network layout as it stands: at the moment, outside staff members use some services on the existing internet 1 link. Those services are hosted by the Mac server. If I change the gateway of the Mac server to the second modem, those outside staff lose visibility of those services. Now I don't know how to go about solving this issue. I want the second link to be used when the Mac server goes to rsync data offsite, but everything else should use link one. How do I do this? Thanks, Scott. EDIT: This has been resolved by setting the default gateway on the Mac server to 192.168.1.254, thus leaving everything on the network as it was before. But to get the Mac server to use the other link for rsync, I've added a route to the Mac server to route traffic to the rsync server through the second gateway:

        sudo route add -net {server IP's}/{Netmask} 192.168.1.1

    I've awarded the answer to gravyface for pointing me to a post on how to make this route persistent on the Mac.

    Read the article

  • Connecting 2 different subnet masks

    - by Jonathan
    I'm no network genius, but I have managed to get most things running. I get confused about subnets and gateways though. We have an office server connected to around 20 PCs that all communicate fine. We have just gotten a cutting machine that won't connect to our network. The server has DHCP, but that fails on the cutting machine, so I've been trying to set the IP manually. Server details are as follows:

        IP: 10.1.1.12
        SUBNET: 255.255.255.0
        GATEWAY: 10.1.1.1

    The Internet connection is via the modem, which is 10.1.1.1. An office PC is usually set up through DHCP and has the following settings:

        IP: 10.1.1.36
        SUBNET: 255.255.255.0
        GATEWAY: 10.1.1.1
        PRIMARY DNS: 10.1.1.12

    The cutting machine computer has two network ports. Port 1 is specifically for communication between the PC and the cutting machine. Its details must be as follows:

        IP: 10.100.100.2
        SUBNET: 255.255.255.252
        GATEWAY: BLANK

    The other network port needs to connect to the server. I was told that its IP and SUBNET need to be as follows:

        IP: 10.100.100.1
        SUBNET: 255.255.255.252
        GATEWAY: ??

    How can I connect this port to the server and/or the Internet? If anyone can offer assistance, it would really be appreciated.
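
    One hedged observation: as described, both ports would sit in the same 10.100.100.0/30 subnet, which won't work. A sketch of the usual fix, assuming the cutting machine PC runs Windows and with "Office port" / 10.1.1.50 as made-up names: put the second port on the office subnet with the modem as its gateway, and it can then reach the server and the Internet like any office PC.

        rem address, mask, gateway, gateway metric (names and addresses are examples)
        netsh interface ip set address "Office port" static 10.1.1.50 255.255.255.0 10.1.1.1 1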

    Read the article

  • Most Innovative IDM Projects: Awards at OpenWorld

    - by Tanu Sood
    On Tuesday at Oracle OpenWorld 2012, Oracle recognized the winners of the Innovation Awards 2012 at a ceremony presided over by Hasan Rizvi, Executive Vice President at Oracle. Oracle Fusion Middleware Innovation Awards recognize customers for achieving significant business value through innovative uses of Oracle Fusion Middleware offerings. Winners are selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. This year's Award honors customers for their cutting-edge solutions driving business innovation and IT modernization using Oracle Fusion Middleware. The program has grown over the past 6 years, receiving a record number of nominations from customers around the globe. The winners were selected by a panel of judges that ranked each nomination across multiple scoring categories. Congratulations to both Avea and ETS for winning this year's Innovation Award for Identity Management. (See last year's winners here.)

    Identity Management Innovation Award 2012 Winner – Avea

    Company: Founded in 2004, Avea is the sole GSM 1800 mobile operator of Turkey and had reached a nationwide customer base of 12.8 million as of the end of 2011.
    Region: Turkey (EMEA)
    Products: Oracle Identity Manager, Oracle Identity Analytics, Oracle Access Management Suite

    Business Drivers:
    - Manage the agility and scale required for GSM operations and enable call center efficiency by letting agents change their identity profiles (accounts and entitlements) rapidly based on call load
    - Enhance user productivity and call center efficiency with self-service password resets
    - Enforce compliance and audit reporting
    - Seamless identity management between Avea and parent company Turk Telekom

    Innovation and Results:
    - One of the first Sun2Oracle identity management migrations, designed for high-performance provisioning and trusted reconciliation, built with connectors developed on the ICF architecture that provide custom user interfaces for dynamic and rapid management of roles and entitlements, along with entitlement-level attestation using closed-loop remediation between Oracle Identity Manager and Oracle Identity Analytics
    - Dramatic reduction in identity administration and call center password reset tasks, leading to a 20% reduction in administration costs and a 95% reduction in password-related calls
    - Enhanced user productivity by up to 25% to date
    - Enforced enterprise security and reduced risk
    - Cost-effective compliance management
    - Looking to seamlessly and securely integrate with parent and sister companies' infrastructure

    Identity Management Innovation Award 2012 Winner – Educational Testing Service (ETS)

    Company: ETS is a private nonprofit organization devoted to educational measurement and research, primarily through testing.
    Region: U.S.A. (North America)
    Products: Oracle Access Manager, Oracle Identity Federation, Oracle Identity Manager

    Business Drivers: ETS develops and administers more than 50 million achievement and admissions tests each year in more than 180 countries, at more than 9,000 locations worldwide. As the business becomes more globally based, having a robust solution to security and user management issues becomes paramount. The organization was looking for:
    - A simplified user experience for over 3,000 company users and a dynamic student and staff population of more than 6 million
    - Infrastructure and administration cost reduction
    - Managing security risk by controlling 3rd-party access to ETS systems
    - Enforced compliance and managed audit reporting
    - Automated on-boarding and decommissioning of user accounts to improve security, reduce administration costs and enhance user productivity
    - An improved user experience with simplified sign-on and user self-service

    Innovation and Results:
    1. Manage risk:
    - Centralized system to control user access
    - Provided a secure way of accessing service providers' applications using federated SSO
    - Provides reporting capability for auditing, governance and compliance
    2. Improve efficiency:
    - Real-time provisioning to target systems
    - Centralized provisioning system for user management and access controls
    - Enabling user self-service
    3. Reduce cost:
    - Re-using common shared services for provisioning, SSO, and access by application, reducing development cost and time
    - Reducing infrastructure and maintenance cost by decommissioning legacy/redundant IDM services
    - Reducing the time and effort to implement security functionality in business applications ("onboard" instead of new development)

    ETS was able to fold in new and evolving requirements in addition to the initial stated goals, realizing quick ROI and successfully meeting business objectives. Congratulations to the winners once again. We will be sure to bring you more from these Innovation Award winners over the next few months.

    Read the article

  • Connecting to a server with multiple NICs in another VLAN

    - by Thierry
    I have a Windows 2003 server with 3 NICs on 3 VLANs (this is in domain 1). NIC 1 has a default gateway to my router/firewall (SonicWall). On NICs 2 and 3 I have left the gateway empty, because that is what is advised everywhere. Within this domain and VLANs 1-3 everything works fine. BUT... I have a second domain (domain 2) with a 4th VLAN (all 4 VLANs connected to the same router/firewall) from which my clients need to access the 2003 server in domain 1 (it's my antivirus management console for both domains). When I ping the server from VLAN 4 by its FQDN, it randomly resolves to the IP of NIC 1, 2 or 3 (logically, because the server is known in DNS by its 3 IP addresses, and that is needed for my VLANs 1-3). I don't really have a problem with that. BUT I only get an answer from NIC 1 (which sounds logical to me, because it's the only one with a gateway). It is not a router problem, because I'm testing in this phase, and pinging from VLAN 4 to any machine in VLAN 1, 2 or 3 that has one NIC works just fine. If I add a gateway to NIC 2 and NIC 3, I get an answer from all 3 NICs and everything works fine. But I know it's advised not to do that. Can anyone give me advice in this particular case? Would it really be a problem to add a gateway to NICs 2 and 3? They would be pointing to the same router/firewall (only with a different IP address, based on the VLAN). Or is there another good solution to fix this problem? Thanks in advance, Thierry.
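
    Windows Server 2003 has no real source-based policy routing, so one commonly suggested middle ground, sketched below with made-up addresses, is a single persistent route for the VLAN 4 subnet via one chosen router instead of extra default gateways. Replies from all three NICs will then leave asymmetrically through that one interface, so the firewall has to tolerate that.

        rem inspect the interface list and current routes first
        route print
        rem route the VLAN 4 subnet via the router address on one NIC's VLAN (example addresses)
        route -p add 10.0.4.0 mask 255.255.255.0 10.0.2.1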

    Read the article

  • Linux as a router for public networks

    - by nixnotwin
    My ISP had given me a /30 network. Later, when I wanted more public IPs, I requested a /29 network. I was told to keep using my earlier /30 network on the interface facing the ISP, and to use the newly given /29 network on the other interface, which connects to my NAT router and servers. This is what I got from the ISP:

        WAN IP: 179.xxx.4.128/30
        CUSTOMER IP: 179.xxx.4.130
        ISP GATEWAY IP: 179.xxx.4.129
        SUBNET: 255.255.255.252

        LAN IPS: 179.xxx.139.224/29
        GATEWAY IP: 179.xxx.139.225
        SUBNET: 255.255.255.248

    I have an Ubuntu PC which has two interfaces, so I am planning to do the following:

        eth0 will be given 179.xxx.4.130/30, gateway 179.xxx.4.129
        eth1 will be given 179.xxx.139.225/29

    And I will have the following in /etc/sysctl.conf:

        net.ipv4.ip_forward=1

    These will be the iptables rules:

        iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

    My clients, which have the IPs 179.xxx.139.226/29 and 179.xxx.139.227/29, will be made to use 179.xxx.139.225/29 as their gateway. Will this configuration work for me? Any comments? If it works, what iptables rules can I use to have a bit of security? P.S. Both networks are non-private and there is no NATing.
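
    The plan looks sound in outline. For "a bit of security", a minimal hedged sketch: keep forwarding stateful, default-deny inbound forwards, and open only the services the /29 hosts actually expose (the HTTP rule below is just an example; 179.xxx mirrors the redaction above).

        # allow everything the LAN side initiates, plus replies back to it
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
        # open selected inbound services only, e.g. HTTP to one host (example)
        iptables -A FORWARD -i eth0 -o eth1 -p tcp -d 179.xxx.139.226 --dport 80 -j ACCEPT
        # drop anything not explicitly allowed
        iptables -P FORWARD DROP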

    Read the article

  • Nginx + PHP-FPM = "Random" 502 Bad Gateway

    - by david
    I am running Nginx and proxying PHP requests via FastCGI to PHP-FPM for processing. I will randomly receive 502 Bad Gateway error pages - I can reproduce this issue by clicking around my PHP websites very rapidly or refreshing a page for a minute or two. When I get the 502 error page, all I have to do is refresh the browser and the page loads properly. Here is my setup:

        nginx/0.7.64
        PHP 5.3.2 (fpm-fcgi) (built: Apr 1 2010 06:42:04)
        Ubuntu 9.10 (Latest 2.6 Paravirt)

    I compiled PHP-FPM using this ./configure directive:

        ./configure --enable-fpm --sysconfdir=/etc/php5/conf.d --with-config-file-path=/etc/php5/conf.d/php.ini --with-zlib --with-openssl --enable-zip --enable-exif --enable-ftp --enable-mbstring --enable-mbregex --enable-soap --enable-sockets --disable-cgi --with-curl --with-curlwrappers --with-gd --with-mcrypt --enable-memcache --with-mhash --with-jpeg-dir=/usr/local/lib --with-mysql=/usr/bin/mysql --with-mysqli=/usr/bin/mysql_config --enable-pdo --with-pdo-mysql=/usr/bin/mysql --with-pdo-sqlite --with-pspell --with-snmp --with-sqlite --with-tidy --with-xmlrpc --with-xsl

    My php-fpm.conf looks like this (the relevant parts):

        ...
        <value name="pm">
        <value name="max_children">3</value>
        ...
        <value name="request_terminate_timeout">60s</value>
        <value name="request_slowlog_timeout">30s</value>
        <value name="slowlog">/var/log/php-fpm.log.slow</value>
        <value name="rlimit_files">1024</value>
        <value name="rlimit_core">0</value>
        <value name="chroot"></value>
        <value name="chdir"></value>
        <value name="catch_workers_output">yes</value>
        <value name="max_requests">500</value>
        ...

    I've tried increasing max_children to 10 and it makes no difference. I've also tried setting pm to 'dynamic' with max_children at 50 and start_server at '5' without any difference. I have tried using both 1 and 5 nginx worker processes.

    My fastcgi_params conf looks like:

        fastcgi_connect_timeout 60;
        fastcgi_send_timeout 180;
        fastcgi_read_timeout 180;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;
        fastcgi_intercept_errors on;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param REDIRECT_STATUS 200;

    Nginx logs the error as:

        [error] 3947#0: *10530 connect() failed (111: Connection refused) while connecting to upstream, client: 68.40.xxx.xxx, server: www.domain.com, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.domain.com"

    PHP-FPM logs the following at the time of the error:

        [NOTICE] pid 17161, fpm_unix_init_main(), line 255: getrlimit(nofile): max:1024, cur:1024
        [NOTICE] pid 17161, fpm_event_init_main(), line 93: libevent: using epoll
        [NOTICE] pid 17161, fpm_init(), line 50: fpm is running, pid 17161
        [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17162 started
        [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17163 started
        [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17164 started
        [NOTICE] pid 17161, fpm_event_loop(), line 111: ready to handle connections

    My CPU usage maxes out around 10-15% when I recreate the issue, and my free memory (free -m) is 130MB. I had this intermittent 502 Bad Gateway issue when I was using php5-cgi to service my PHP requests as well. Does anyone know how to fix this?
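
    A few checks worth running while reproducing the 502s, a sketch rather than a definitive diagnosis, since "Connection refused" on 127.0.0.1:9000 usually means PHP-FPM momentarily stopped accepting connections:

        # is php-fpm still listening on 9000, and how many workers are alive?
        netstat -tlnp | grep ':9000'
        ps aux | grep php-fpm | grep -v grep
        # listen-queue overflows suggest too few children or too small a backlog
        netstat -s | grep -i listen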

    Read the article

  • What can I do to improve a project if there is a no-listening situation? Developers vs. Management

    - by NazGul
    Hi all, I hope that I'm not the only one and that I can get an answer from someone with more experience than me, so I can think more clearly and not get depressed with this developer's life. I've been working as a developer for a small company for three years now. In those three years I've been working on the same project and, sincerely, I think this project could be used as a CASE STUDY, because it has every situation that should never happen in a project and that makes a project fail. To begin with, and as you've probably already noticed, the project is 3 years old (development only) and still unfinished, because in every meeting there is a "new priority", or a "new problem" to solve, or a "new feature" to add. So, the first problem is that no target is set. How can you know when something is finished if you don't know what you want? I understand Management, because they see an opportunity and try to seize it, but I don't understand how they cannot see (or hear us say) that they'll lose everything they already have and whatever they'd eventually get. Second, there is no team. My team consists of three people: a Senior Developer, a DBA and, finally, me, for all the work (support, testing, new features, bug fixing, meetings, project management of clients, etc.), aka the Junior Developer. The first (the Senior Developer) does not perform any tests on his changes, so most of the time his changes give us problems (us = me, since I'm the one who will fix them). The second (the DBA) is an uncompromising person and you cannot talk to him, believe me, I tried! In his view, everything he does is fantastic... even if it is the most complicated way to do it... And he does everything he wants, even if we won't need it for another 5 months and an extra hand would help with the things we have to do now. As you can see, it is very hard to work with no help... Third, there is no testing. Every... I repeat, every release of the project, the customers want to kill us, because there are a lot of bugs. Management? They say they want tests before the release. Us? We say the same. Time? No time. Management? There is always some time to open the application and click some buttons. Us? We try to explain that it is not so simple. Management doesn't care... end of story. Actually, most of the bugs could be avoided with rigorous work... Some people just want to put on a show for Management. "Did you ask for this? Cool, it's done. Bugs? The do-all-the-work guy will solve them." Unfortunately for me, sometimes the do-all-the-work guy also has to finish the job. And to make this all better, I'm the person who listens to the complaints from the customers. Cool, huh? I know, everyone makes mistakes. But there are mistakes and mistakes... To top it off, in Management's view, "the problem is the lack of individual project management", because we cannot do all the stuff they ask, even though there is no PM for the project itself. And they ask us to work overtime without any reward... I do say all this stuff to Management and the other members, but by telling it, I'm the bad guy, the guy who complains when everything is going well... but we need to work overtime... sigh. What can I do to make it work? Has anyone been in a situation like this, and what did you do? I hope you can understand my problem; my English is a little rusty. Thanks.

    Read the article

  • Building vs. Buying a Master Data Management Solution

    - by david.butler(at)oracle.com
    Many organizations prefer to build their own MDM solutions. The argument is that they know their data quality issues and their data better than anyone, and that a focused solution will cost less in the long run than a vendor-supplied, general-purpose product. This is not unreasonable if you think of MDM as a point solution for a particular data quality problem. But this approach carries significant risk. We now know that organizations achieve significant competitive advantages when they deploy MDM as a strategic, enterprise-wide solution, with the most common best practice being to deploy a tactical MDM solution and grow it into a full information architecture. A build-your-own approach most certainly will not scale to a larger architecture unless it is done correctly with the larger solution in mind. It is possible to build a home-grown point MDM solution in such a way that it will dovetail into broader MDM architectures. A very good place to start is to use the same basic technologies that Oracle uses to build its own MDM solutions. Start with the Oracle 11g database to create a flexible, extensible and open data model to hold the master data and all needed attributes. The Oracle database is the most flexible, highly available and scalable database system on the market. With its Real Application Clusters (RAC) it can even support the mixed OLTP and BI workloads that represent typical MDM data access profiles. Use Oracle Data Integration (ODI) for batch data movement between applications, MDM data stores, and the BI layer. Use Oracle GoldenGate for more real-time data movement. Use Oracle's SOA Suite for application integration, with its BPEL Process Manager to orchestrate MDM connections to business processes, Identity Management for managing users, WS Manager for managing web services, Business Intelligence Enterprise Edition for analytics, and JDeveloper for creating or extending the MDM management application. Oracle utilizes these technologies to build its MDM Hubs. Customers who build their own MDM solution using these components will easily migrate to Oracle-provided MDM solutions when the home-grown solution runs out of gas. But even with a full stack of open, flexible MDM technologies, creating a robust MDM application can be a daunting task. For example, a basic MDM solution will need:

    - a set of data access methods that support master data as a service, direct real-time access, and batch loads and extracts
    - a data migration service for initial loads and periodic updates
    - a metadata management capability for items such as business entity matrixed relationships and hierarchies
    - a source system management capability to fully cross-reference business objects and to satisfy seemingly conflicting data ownership requirements
    - a data quality function that can find and eliminate duplicate data while ensuring correct data attribute survivorship
    - a set of data quality functions that can manage structured and unstructured data
    - a data quality interface to assist with preventing new errors from entering the system even when data entry is outside the MDM application itself
    - a continuing data cleansing function to keep the data up to date
    - an internal triggering mechanism to create and deploy change information to all connected systems
    - a comprehensive role-based data security system to control and monitor data access, update rights, and maintain change history
    - a flexible business rules engine for managing master data processes such as privacy and data movement
    - a user interface to support casual users and data stewards
    - a business intelligence structure to support profiling, compliance, and business performance indicators
    - an analytical foundation for directly analyzing master data

    Oracle's pre-built MDM Hub solutions are full-featured 3-tier Internet applications designed to participate in the full Oracle technology stack or to run independently in other open IT SOA environments. Building MDM solutions from scratch can take years. Oracle's pre-built MDM solutions can bring quality data to the enterprise in a matter of months. But if you must build, at least build with the world's best technology stack, in a way that simplifies the eventual upgrade to Oracle MDM and to the full enterprise-wide information architecture that it enables.

    Read the article

  • New Release of Oracle EPM (Enterprise Performance Management)

    - by Theresa Hickman
    I'm a huge fan of Hyperion products and consider Hyperion to be one of the best acquisitions Oracle has made in terms of applications. So I am really excited to talk about their latest release, Release 11.1.2 of the Oracle EPM System. This is EPM's largest release in 2 years, and it's jam-packed with new modules and features. In terms of brand new products, there are three: 1. Public Sector Planning and Budgeting meets the needs of public sector agencies, higher education, governments, etc. that have complex budget requirements. It supports position or employee-based budgeting and integrates with MS Office and your ERP ledgers to perform commitment control. 2. Hyperion Financial Close Management is a complete financial close solution that orchestrates the entire close process from subledgers and general ledger to financial reporting and disclosure submissions. And of course, it is integrated with GL systems and consolidation systems. I saw a demo of this and it looked pretty slick. They have this unified close calendar that looks like a regular calendar that gives each person participating in the close process a task list. It comes with a Gantt chart that shows the relationships and dependencies among closing tasks. There are dashboards to allow you to track the close progress and completion of tasks as well as perform trend analysis and see how much time is being spent on different activities in the close process. This gives you visibility that you never had before to understand where the bottlenecks are and where improvements could be made. I think what I liked best about this product was that it provides a central place for all participants to communicate their progress. When I worked as an Accountant, we used ad hoc tools, such as spreadsheets, Word documents, emails, and phone calls during the close process. I like the idea of having a central system to track the overall progress as well as automate the entire financial close process. Who knows, maybe Accountants won't have to revolve their lives around the month end close anymore with a tool like this. Those periodic fire drills can become predictable, well managed processes. 3. Disclosure Management is an out-of-the-box, pre-packaged XBRL solution to meet statutory reporting requirements. This product is really going to help companies improve the timeliness of producing financial reports. Reports can be authored using MS Word and Excel and then XBRL instance documents can be produced with its embedded XBRL tags. It even supports footnotes and disclosures of non-financial information. With a product like this, companies no longer have to outsource their XBRL filing; they can bring it back in house to save costs and time. In terms of other enhancements, they have ERP Integrator that provides integration and drill downs from Hyperion products to source systems, such as Oracle E-Business Suite, PeopleSoft, and SAP. No other vendor offers this level of integration. There's also a new product that links Oracle Essbase directly to Hyperion Financial Management for internal financial reporting, and new integrations between Hyperion Financial Management and Oracle's GRC products. They also improved the usability of Oracle Hyperion Planning. They made it much easier for end users to use the system via the web or via MS Excel when submitting plans and budgets. It is also integrated with intelligent approval workflows that are data-driven, user-configurable, and scenario-specific to efficiently streamline the budgeting process. 
    Here's the press release from April 7, 2010. Here's the pre-recorded webcast where you can see the demos. Just register and watch the hour-long presentation. And finally, here's the newsletter.

    Read the article

  • Master Data Management for Location Data - Oracle Site Hub

    - by david.butler(at)oracle.com
    Most MDM discussions cover key domains such as customer, supplier, product, service, and reference data. It is usually understood that these domains have complex structures and hundreds if not thousands of attributes that need governing. Location, on the other hand, strikes most people as address data. How hard can that be? But for many industries, locations are complex, and site information is critical to efficient operations and relevant analytics. Retail stores and malls, bank branches, construction sites come to mind. But one of the best industries for illustrating the power of a site mastering application is Oil & Gas.   Oracle's Master Data Management solution for location data is the Oracle Site Hub. It is a location mastering solution that enables organizations to centralize site and location specific information from heterogeneous systems, creating a single view of site information that can be leveraged across all functional departments and analytical systems.   Let's take a look at the location entities the Oracle Site Hub can manage for the Oil & Gas industry: organizations, property, land, buildings, roads, oilfield, service center, inventory site, real estate, facilities, refineries, storage tanks, vendor locations, businesses, assets; project site, area, well, basin, pipelines, critical infrastructure, offshore platform, compressor station, gas station, etc. Any site can be classified into multiple hierarchies, like organizational hierarchy, operational hierarchy, geographic hierarchy, divisional hierarchies and so on. Any site can also be associated to multiple clusters, i.e. collections of sites, and these can be used as a foundation for driving reporting, analysis, organize daily work, etc. Hierarchies can also be used to model entities which are structured or non-structured collections of nodes, like for example routes, pipelines and more. The User Defined Attribute Framework provides the needed infrastructure to add single row attributes groups like well base attributes (well IDs, well type, well structure and key characterizing measures, and more) and well geometry, and multi row attribute groups like well applications, permits, production data, activities, operations, logs, treatments, tests, drills, treatments, and KPIs. Site Hub can also model areas, lands, fields, basins, pools, platforms, eco-zones, and stratigraphic layers as specific sites, tracking their base attributes, aliases, descriptions, subcomponents and more. Midstream entities (pipelines, logistic sites, pump stations) and downstream entities (cylinders, tanks, inventories, meters, partner's sites, routes, facilities, gas stations, and competitor sites) can also be easily modeled, together with their specific attributes and relationships. Site Hub can store any type of unstructured data associated to a site. This could be stored directly or on an external content management solution, like Oracle Universal Content Management. Considering a well, for example, Site Hub can store any relevant associated multimedia file such as: CAD drawings of the well profile, structure and/or parts, engineering documents, contracts, applications, permits, logs, pictures, photos, videos and more. For any site entity, Site Hub can associate all the related assets and equipments at the site, as well as all relationships between sites, between a site and multiple parties, and between a site and any purchasable or sellable item, over time. 
Items can be equipment, instruments, facilities, services, products, production entities, production facilities (pipelines, batteries, compressor stations, gas plants, meters, separators, etc.), support facilities (rigs, roads, transmission or radio towers, airstrips, etc.), supplier products and services, catalogs, and more. Items can just be associated to sites using standard Site Hub features, or they can be fully mastered by implementing Oracle Product Hub. Site locations (addresses or geographical coordinates) are also managed with out-of-the-box address geo-coding capabilities coupled with Google Maps integration to deliver powerful mapping capabilities and spatial data analysis. Locations can be shared between different sites. Centered on the site location, any site can also have associated areas. Site Hub can master any site location specific information, like for example cadastral, ownership, jurisdictional, geological, seismic and more, and any site-centric area specific information, like for example economical, political, risk, weather, logistic, traffic information and more. Now if anyone ever asks you why locations need MDM, think about how all these Oil & Gas entities and attributes would translate into your business locations. To learn more about Oracle's full MDM solution for the digital oil field, here is a link to Roberto Negro's outstanding whitepaper: Oracle Site Master Data Management for mastering wells and other PPDM entities in a digital oilfield context  

    Read the article

  • Centralized Project Management Brings Needed Cost Controls to Growing Brazilian Firm

    - by Melissa Centurio Lopes
    Fast growth and a significant increase in business activities were creating project management challenges for CPqD, a developer of innovative information and communication technologies for large Brazilian organizations. To bring greater efficiency and centralized project management capabilities to its operations, CPqD chose Oracle's Primavera P6 Enterprise Project Portfolio Management. "Oracle Primavera is an essential tool for our day-to-day business, and I notice the effort Oracle makes to constantly innovate and to add more functionality in an increasingly shorter period of time," says Márcio Alexandre da Silva, IT department project coordinator, CPqD. He explains that before CPqD implemented the Oracle solution, the company did not have a corporate view of projects. "Our project monitoring was decentralized and restricted to each coordinator," the project coordinator says. "With the Oracle solution, we achieved actual shared management, more control, and budgets that stay within projections." Among the benefits that CPqD now enjoys are:

    - The ability to more effectively identify how employees are allocated, enabling managers to increase or reduce resources based on project scope, as well as secure the resources required for unexpected projects and demands
    - A 75 percent reduction in the time it takes to collect project data and indicators: automated and centralized collection means project coordinators no longer have to manually compile information that was spread among various systems

    Read the complete CPqD company snapshot. Read more in the October edition of the quarterly Information InDepth EPPM Newsletter.

    Read the article

  • Oracle Fusion Supply Chain Management (SCM) Designs May Improve End User Productivity

    - by Applications User Experience
    Michele Molnar, Senior Usability Engineer, Applications User Experience

    The Challenge: The SCM User Experience team, in close collaboration with product management and strategy, completely redesigned the user experience for Oracle Fusion applications. One of the goals of this redesign was to increase end user productivity by applying design patterns and guidelines and incorporating findings from extensive usability research. But a question remained: how do we know that the Oracle Fusion designs will actually increase end user productivity?

    The Test: To answer this question, the SCM usability engineers compared Oracle Fusion designs to their corresponding existing Oracle applications using the workflow time analysis method. The workflow time analysis method breaks tasks into a sequence of operators. By applying standard time estimates for all of the operators in the task, an estimate of the overall task time can be calculated. The workflow time analysis method has recently been adopted by the Applications User Experience group for use in predicting end user productivity. Using this method, a design can be tested and refined as needed to improve productivity even before the design is coded. For the study, we selected some of our recent designs for Oracle Fusion Product Information Management (PIM). The designs encompassed tasks performed by product managers to create, manage, and define products for their organization. (See Figure 1 for an example.) In applying this method, the SCM usability engineers collaborated with product management to compare the new Oracle Fusion Applications designs against Oracle's existing applications. Together, we performed the following activities:

    - Identified the five most frequently performed tasks
    - Created detailed task scenarios that provided the context for each task
    - Conducted task walkthroughs
    - Analyzed and documented the steps and flow required to complete each task
    - Applied standard time estimates to the operators in each task to estimate the overall task completion time

    Figure 1. The interactions on each Oracle Fusion Product Information Management screen were documented, as indicated by the red highlighting. The task scenario and script provided the context for each task.

    The Results: The workflow time analysis method predicted that the Oracle Fusion Applications designs would result in productivity gains in each task, ranging from 8% to 62%, with an overall productivity gain of 43%. All other factors being equal, the new designs should enable these tasks to be completed in about half the time it takes with existing Oracle Applications. Further analysis revealed that these performance gains would be achieved by reducing the number of clicks and screens needed to complete the tasks.

    Conclusions: Using the workflow time analysis method, we can expect the Oracle Fusion Applications redesign to succeed in improving end user productivity. The workflow time analysis method appears to be an effective and efficient tool for testing, refining, and retesting designs to optimize productivity. It does not replace usability testing with end users, but it can be used as an early predictor of design productivity even before designs are coded. We are planning to conduct usability tests later in the development cycle to compare actual end user data with the workflow time analysis results. Such results can potentially be used to validate the productivity improvement predictions.
Used together, the workflow time analysis method and usability testing will enable us to continue creating, evaluating, and delivering Oracle Fusion designs that exceed the expectations of our end users, both in the quality of the user experience and in productivity. (For more information about studying productivity, refer to the Measuring User Productivity blog.)
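
    To make the method concrete, here is a toy calculation, not Oracle's actual tooling or numbers, using the standard keystroke-level-model operator estimates (M = mental preparation 1.35 s, P = pointing 1.1 s, K = keystroke 0.2 s, H = moving a hand between mouse and keyboard 0.4 s):

        # hypothetical operator counts for one task: 6 M, 10 P, 24 K, 4 H
        awk 'BEGIN { print 6*1.35 + 10*1.1 + 24*0.2 + 4*0.4 " seconds" }'
        # prints 25.5 seconds; comparing such totals between the old and new
        # designs yields the predicted productivity gain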

    Read the article

  • Reaching to the Holy Grail of Data Management

    - by Irem Radzik
    Pervasive, continuous access to trusted data: that's the ultimate goal of data management. It enables organizations to leverage data as an asset to create value for customers and the organization, and it creates the strong foundation needed to move the business forward. How you get there is also critical. As with all IT initiatives, using high-performance solutions with a low cost of ownership is another key requirement in today's IT world. Oracle's data integration product strategy focuses on helping customers achieve this ultimate goal with high performance and low TCO. At OpenWorld, we will be showing how Oracle Data Integration products help you reach your data management goals, considering new trends in information management, such as big data and cloud computing. We will also provide an update on the latest product releases, such as Oracle GoldenGate 11gR2. If you will be at OpenWorld, please join us on Monday, Oct 1st at 10:45am at Moscone West – 3005 to hear our VP of Product Development, Brad Adelberg, present "Future Strategy, Direction, and Roadmap of Oracle's Data Integration Platform". The Data Integration track at OpenWorld covers a variety of topics and speakers. In addition to product management of Oracle GoldenGate, Oracle Data Integrator, and Enterprise Data Quality presenting product updates and roadmaps, we have several customer panels and stand-alone sessions featuring select customers such as St. Jude Medical, Raymond James, Aderas, Turkcell, Paychex, Comcast, Ticketmaster, Bank of America and more. You can see an overview of Data Integration sessions here. If you are not able to attend OpenWorld, please check out our latest resources for Data Integration and Oracle GoldenGate. In the coming weeks you will see more blogs about our products' new capabilities and what to expect at OpenWorld. I hope to see you at OpenWorld, and stay in touch via our future blogs.

    Read the article

  • Master Data Management and Cloud Computing

    - by david.butler(at)oracle.com
    Cloud Computing is all the rage these days. There are many reasons why this is so. But like its predecessor, Service Oriented Architecture, it can fall on hard times if the underlying data is left unmanaged. Master Data Management is the perfect Cloud companion. It can materially increase the chances for successful Cloud initiatives. In this blog, I'll review the nature of the Cloud and show how MDM fits in. Here's the National Institute of Standards and Technology Cloud definition:

        Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.

    Cloud architectures have three main layers: applications or Software as a Service (SaaS), Platforms as a Service (PaaS), and Infrastructure as a Service (IaaS). SaaS generally refers to applications that are delivered to end-users over the Internet. Oracle CRM On Demand is an example of a SaaS application. Today there are hundreds of SaaS providers covering a wide variety of applications including Salesforce.com, Workday, and NetSuite. Oracle MDM applications are located in this layer of Oracle's On Demand enterprise Cloud platform. We call it Master Data as a Service (MDaaS). PaaS generally refers to an application deployment platform delivered as a service. They are often built on a grid computing architecture and include database and middleware. Oracle Fusion Middleware is in this category and includes the SOA and Data Integration products used to connect SaaS applications, including MDM. Finally, IaaS generally refers to computing hardware (servers, storage and network) delivered as a service. This typically includes the associated software as well: operating systems, virtualization, clustering, etc. Cloud Computing benefits are compelling for a large number of organizations. These include significant cost savings, increased flexibility, and fast deployments. Cost advantages include paying for just what you use. This is especially critical for organizations with variable or seasonal usage. Companies don't have to invest to support peak computing periods. Costs are also more predictable and controllable. Increased agility includes access to the latest technology and experts without making significant up-front investments. While Cloud Computing is certainly very alluring with a clear value proposition, it is not without its challenges. An IDC survey of 244 IT executives/CIOs and their line-of-business (LOB) colleagues identified a number of issues:

    - Security: 74% identified security as an issue involving data privacy and resource access control.
    - Integration: 61% found that it is hard to integrate Cloud Apps with in-house applications.
    - Operational Costs: 50% are worried that On Demand will actually cost more, given the impact of poor data quality on the rest of the enterprise.
    - Compliance: 49% felt that compliance with required regulatory, legal and general industry requirements (such as PCI, HIPAA and Sarbanes-Oxley) would be a major issue. When control is lost, the ability of a provider to directly manage how and where data is deployed, used and destroyed is negatively impacted.

    There are others, but I singled out these four top issues because Master Data Management, properly incorporated into a Cloud Computing infrastructure, can significantly ameliorate all of these problems. Cloud Computing can literally rain raw data across the enterprise.
    According to fellow blogger Mike Ferguson, "the fracturing of data caused by the adoption of cloud computing raises the importance of MDM in keeping disparate data synchronized." David Linthicum, CTO of Blue Mountain Labs, blogs that "the lack of MDM will become more of an issue as cloud computing rises. We're moving from complex federated on-premise systems, to complex federated on-premise and cloud-delivered systems." Left unmanaged, non-standard, inconsistent, ungoverned data with questionable quality can pollute analytical systems, increase operational costs, and reduce the ROI in Cloud and On-Premise applications. As cloud computing becomes more relevant, and more data, applications, services, and processes are moved out to cloud computing platforms, the need for MDM becomes ever more important. Oracle's MDM suite is designed to deal with all four of the above Cloud issues listed in the IDC survey:

    - Security: MDM manages all master data attribute privacy and resource access control issues.
    - Integration: MDM pre-integrates Cloud Apps with each other and with On Premise applications at the data level.
    - Operational Costs: MDM significantly reduces operational costs by increasing data quality, thereby improving enterprise business process efficiency.
    - Compliance: MDM, with its built-in Data Governance capabilities, ensures that the data is governed according to organizational standards. This facilitates rapid and accurate reporting for compliance purposes.

    Oracle MDM creates governed, high-quality master data. A unified, cleansed and standardized data view is produced. The Oracle Customer Hub creates a single view of the customer. The Oracle Product Hub creates high-quality product data designed to support all go-to-market processes. Oracle Supplier Hub dramatically reduces the chances of 'supplier exceptions'. Oracle Site Hub masters locations. And Oracle Hyperion Data Relationship Management masters financial reference data and manages enterprise hierarchies across operational areas from ERP to EPM and CRM to SCM. Oracle Fusion Middleware connects Cloud and On Premise applications to MDM Hubs and brings high-quality master data to your enterprise business processes. An independent analyst once said, "Poor data quality is like dirt on the windshield. You may be able to drive for a long time with slowly degrading vision, but at some point, you either have to stop and clear the windshield or risk everything." Cloud Computing has the potential to significantly degrade data quality across the enterprise over time. Deploying a Master Data Management solution prior to or in conjunction with a move to the Cloud can ensure that the data flowing into the enterprise from the Cloud is clean and governed. This will in turn ensure that expected returns on the investment in Cloud Computing are realized. Oracle MDM has proven its mettle in this area and has the customers to back that up. In fact, I will be hosting a webcast on Tuesday, April 10th at 10 am PT with one of our top Cloud customers, the Church Pension Group. They have moved all mainline applications to a hosted model and use Oracle MDM to ensure the master data is managed and cleansed before it is propagated to other cloud and internal systems. I invite you to join Martin Hossfeld, VP, IT Operations, and Danette Patterson, Enterprise Data Manager, as they review business drivers for MDM and hosted applications, how they did it, the benefits achieved, and lessons learned. You can register for this free webcast here.
Hope to see you there.
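
    To make the data-quality argument above concrete, here is a minimal, illustrative sketch of what master-data consolidation does. It is not Oracle MDM or any real product API, just a toy match-and-merge in Python under simple assumptions: records arriving from two sources, a naive name normalization as the match key, and a last-non-empty-value survivorship rule.

        # Toy match-and-merge: consolidate customer records from two "cloud"
        # sources into a single golden record keyed on a normalized name.
        from collections import defaultdict

        crm_records = [{"name": "ACME Corp.", "city": "NYC"}]
        erp_records = [{"name": "acme corp", "city": "New York"}]

        def normalize(name: str) -> str:
            # Naive match key: lowercase and strip non-alphanumerics.
            return "".join(ch for ch in name.lower() if ch.isalnum())

        golden = defaultdict(dict)
        for record in crm_records + erp_records:
            key = normalize(record["name"])
            # Survivorship rule (simplified): last non-empty value wins.
            golden[key].update({k: v for k, v in record.items() if v})

        print(dict(golden))  # one merged record instead of two conflicting ones

    A real hub adds governed match rules, survivorship policies, cross-references, and history, but the shape of the problem is the same: many conflicting records in, one golden record out.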

    Read the article

  • Cisco VPN error 403: Unable to contact the security gateway

    - by mtashev
    I'm trying to connect via the Cisco VPN Client (version 5.0.07.0290), but I get the error below. I should mention that I'm using Windows 8. "Secure VPN Connection terminated locally by the Client. Reason 403: Unable to contact the security gateway." I've tried several fixes, but none of them worked. The display name in regedit is correct, and my certificates are fine as well. If I switch to TCP (the default is UDP), I get error 414 instead. The firewall is off. Any suggestions would be appreciated.
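
    One commonly suggested check for this client on Windows 8 involves the Cisco virtual adapter's DisplayName value in the registry. The sketch below only reads the value so you can inspect it; the service key name CVirtA is the one the 5.x client normally installs, but treat that as an assumption and verify it on your machine.

        # Minimal sketch: inspect the Cisco VPN virtual adapter's DisplayName.
        # Run with Python on the affected Windows 8 machine (winreg is
        # Windows-only). Assumes the standard CVirtA service key.
        import winreg

        KEY_PATH = r"SYSTEM\CurrentControlSet\Services\CVirtA"

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, "DisplayName")
            print("Current DisplayName:", repr(value))
            # A healthy value reads "Cisco Systems VPN Adapter" (or the
            # "... for 64-bit Windows" variant). If it is prefixed with
            # something like "@oem8.inf,%CVirtA_Desc%;", trim everything up
            # to and including the semicolon and write the cleaned value
            # back with winreg.SetValueEx.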

    Read the article

  • Gateway time out connecting to tethered server from Android

    - by BentFX
    I've got an Android device running android-wifi-tether, and it works as advertised. I connect to it from my Ubuntu 12.04 laptop running Apache 2.2.22. The laptop is manually configured to IP 192.168.2.100 in the hosts file; it can ping itself and access its own web server through that address. The WiFi tether hotspot gives the laptop the same 192.168.2.100 address (the laptop was configured to match the hotspot address as a troubleshooting step, and that could be wrong). Using ping, I can reach the laptop from the phone at 192.168.2.100. A port scan from the phone shows port 80 open at that address. So everything looks like it's in place, but any attempt to browse to http://192.168.2.100 fails after a few moments with a 504 (Gateway time out). Any help would certainly be appreciated.
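
    Since the port scan and the browser disagree, it may help to separate raw TCP reachability from HTTP behaviour. This is a small diagnostic sketch, assuming you can run Python somewhere on the tether network (for example via a Python console app on the phone, or a second machine on the same hotspot); a successful connect plus a real Apache response would point at a browser proxy or the duplicate-address setup rather than at the server.

        # Minimal sketch: test TCP connect and a raw HTTP GET against the
        # tethered laptop. The address matches the question; adjust as needed.
        import socket

        HOST, PORT = "192.168.2.100", 80

        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            # If this connect succeeds but the browser still reports 504,
            # the browser is likely routing through a proxy or gateway
            # instead of talking to Apache directly.
            request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
            sock.sendall(request.encode("ascii"))
            print(sock.recv(4096).decode("ascii", errors="replace"))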

    Read the article

  • Looking for a good free or cheap task tracking system

    - by JWood
    I've finally decided that pen and paper/whiteboards are not up to the job as my workload increases, so I'm looking for a good task tracking system. I need something that can track tasks in categories (projects) and allow me to assign a priority to each task. I've tried iTeamWork, which requires projects to have an end time; that's no good for me, as at least one of my projects is ongoing. I also tried Teamly, which required tasks to be set to a specific day; that's no good either, as tasks sometimes take more than a day, and I would like them organised by priority rather than by specific days. I'm preferably looking for something hosted, but I'm happy to install on our servers if it supports PHP/MySQL. Oh, and an iPhone client would be the icing on the cake! Can anyone recommend anything?

    Read the article

  • Project and Business Document Organization

    - by dassouki
    How do you organize, maintain edits, revisions, and the relationships between: Proposals, Contracts, Change Orders, Deliverables, Projects? How do you organize your projects for re-usability? For example, is there a way to add tags to projects, to make them more accessible? What's a good data structure for dumping all my files on an internet server for easy access? Presently, my work folder is set up as follows (numbers indicate nesting depth):

    (1)/work/
      (2)/projects
        (3)/project_a
          (4)/final (which includes all final documents)
            (5)/contracts
            (5)/rfp_rfq
            (5)/change_orders
            (5)/communications (logs all emails, faxes, and meeting notes and minutes)
            (5)/financial
              (6)/paid
              (6)/unpaid
            (5)/reports
          (4)/old (includes all documents that didn't make it into project_a/final/)
        (3)/project_b
          (4) ... same as above ...
      (2)/references
        (3)/technical_references
        (3)/gov_regulations
        (3)/data_sources
        (3)/books
        (3)/topic_based (each area of my expertise has a folder with references in them)
      (2)/business_contacts
        (3)/contacts.xls (contains all my contacts)
      (2)/banking
        (3)/banking.xls (contains a list of all paid and unpaid invoices, as well as some cool stats)
        (3)/quicken (to do my taxes and yada yada)
          (4)/year
      (2)/education (courses I've taken)
        (3)/webinars
        (3)/seminars
        (3)/online_courses
      (2)/publications (includes the publications I've made)
        (3)/publication_id

    We're mostly 5 people working together part-time on this thing. Since this is a very structured approach, I find it really difficult to remember what I've done on previous projects and to go back and forth easily. What are your suggestions for improving my processes? I'm open to closed- and open-source software (as long as the price isn't too high). I also want to implement a system where I can save most of the projects online to increase collaboration and efficiency and reduce bandwidth, especially on document editing. Imagine emailing a document back and forth 5-10 times a day.
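
    On the tagging question, one low-tech approach is a small metadata file per project that a script can scan. The sketch below is purely illustrative; the file name tags.json and the /work/projects root are assumptions matching the layout above, not an established convention.

        # Minimal sketch: tag projects via a tags.json file in each project
        # folder, then scan for matches. All names here are hypothetical.
        import json
        from pathlib import Path

        PROJECTS_ROOT = Path("/work/projects")

        def find_projects_by_tag(tag: str):
            """Yield project folders whose tags.json contains the given tag."""
            for meta in PROJECTS_ROOT.glob("*/tags.json"):
                tags = json.loads(meta.read_text())
                if tag in tags:
                    yield meta.parent

        # Example: a tags.json of ["bridge", "municipal", "2012"] in project_a
        # would make it show up for find_projects_by_tag("municipal").
        for project in find_projects_by_tag("municipal"):
            print(project)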

    Read the article

< Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >