Search Results

Search found 3884 results on 156 pages for 'personal growth'.

Page 119/156

  • Windows 7 libraries and folder redirection nightmare

    - by Lobuno
    Hello! In our Active Directory we deploy a policy to our clients that redirects the personal directory (My Documents) to one of our file servers: \\server\share\username\Documents. On older systems everything worked fine. In Windows 7, some users are experiencing the following symptoms:
    - The Documents library is EMPTY.
    - Where the Documents library should be shown in Explorer, an empty white icon is displayed with no caption.
    - Right-clicking the Documents library to edit the folders that are part of the library brings the dialog up, but the dialog is unusable: no folder is listed and clicking "Add folder" does nothing.
    - Deleting the library and letting Windows re-create it doesn't solve the problem.
    - The shared directory can be accessed via UNC paths and can be mounted as a shared drive as well, but the library is still broken.
    - The shares are on an indexed Windows Server 2008 machine.
    - Using the Windows Library tool utility doesn't solve the problem.
    What can be causing this, and how can it be solved?
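
    A minimal troubleshooting sketch (Python), assuming the per-user library definition file itself is what's corrupt - the path is the stock Windows 7 location, and the script must run as the affected user:

      import os
      import subprocess

      # Libraries are just XML definition files; deleting Documents.library-ms
      # makes Explorer rebuild a default Documents library on its next start.
      lib = os.path.expandvars(
          r"%APPDATA%\Microsoft\Windows\Libraries\Documents.library-ms")
      if os.path.exists(lib):
          os.remove(lib)

      # Restart Explorer so the library definitions are re-created.
      subprocess.run(["taskkill", "/f", "/im", "explorer.exe"], check=False)
      subprocess.Popen("explorer.exe")

    If the rebuilt library breaks again as soon as the redirected folder is added back, the client is probably fine and the server-side index is the thing to look at (Windows 7 libraries only accept remote folders that the file server indexes).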

    Read the article


  • Port translation in router causing some email to fail

    - by user22037
    We are in the process of setting up a spam filter (SAVASM). One change we are making is to push incoming email on port 25 through our spam filter/server, but have users actually send their email on a different port. I am attempting to make this happen by using port address translation to send port 25 traffic to the SAVASM server IP. As a step toward making this change, I set up port translation without actually changing the IP addresses. The NAT rules for the email server went from one static NAT rule with no port specified to multiple static NAT rules, each with a port or group matching the access rules for that server (SMTP, POP3, HTTP, HTTPS, and some other custom ports). The problem we are running into is confusing. Some outgoing mail through this server fails when the router has the multiple NAT rules with port translation settings. Email goes through fine FROM our email to our internal accounts and to Gmail. However, email fails when sent FROM our client's email address TO our client's own email or their personal Comcast. The only setup that worked for them was changing FROM to Comcast, after which messages went through fine to both Comcast and the client's accounts. Switching back to the regular static NAT rule, everything worked for them again. Does anyone have a clue as to what might be going on? We are on a Cisco ASA 5500 box.
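
    A minimal check (Python) to see, from outside the ASA, which translated ports actually answer SMTP - the hostname and the alternate port 587 are placeholders for your own values:

      import smtplib

      HOST = "mail.example.com"  # public name/IP covered by the NAT rules
      for port in (25, 587):
          try:
              with smtplib.SMTP(HOST, port, timeout=10) as s:
                  code, _banner = s.ehlo()
                  print(f"port {port}: EHLO -> {code}")
          except (OSError, smtplib.SMTPException) as e:
              print(f"port {port}: failed ({e})")

    If port 25 greets you with the SAVASM banner but the users' sending port never connects, the translation rules are the place to look rather than the mail server itself.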

    Read the article

  • Synchronization between folders in Mac OS X Lion

    - by Andre Carvalho
    I have an iMac at home and use a MacBook Pro for work. I also have a Time Capsule at home containing my main folder with my main files; I use it as a NAS besides the Time Machine backup tool. I have several personal files I need to access both at home and at work. My wife, who works at home, sometimes uses the same .XLS and .DOC files I might have used during my day at work, away from home. My question is: is there software, or a tool, that I can use to sync my iMac and my MacBook Pro folders? Bear in mind that:
    - There is a chance that my wife and I have changed the same files during the day, so the files would have to be merged so that no information added by either of us is lost.
    - The software/tool installed on the MacBook Pro would need to mount the Time Capsule volume so it can locate the main folder on it.
    - It has to run automatically when my MacBook is at home (with a schedule option).
    I have tested software like synctwofolders and ChronoSync but none fulfilled all my needs. The first couldn't mount the Time Capsule volume and didn't have many schedule options. I really liked ChronoSync, but it doesn't merge the files: when it detects a conflict (for instance, my wife changed a .DOC file on the iMac and I changed the same file on the MacBook), it asks you to choose which version to keep instead of simply allowing you to merge them. I don't have much experience with Automator or scripts, but maybe you can give me a hand with that.
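
    For the simple part of this (propagating whichever copy is newer), a minimal two-way sync sketch in Python; note it cannot merge edits made to the same document on both machines, which is the genuinely hard requirement here. The Time Capsule path is an assumption:

      import os
      import shutil

      def sync(src_root, dst_root):
          """Copy any file that is missing or newer on the source side."""
          for root, _dirs, files in os.walk(src_root):
              rel = os.path.relpath(root, src_root)
              for name in files:
                  src = os.path.join(root, name)
                  dst = os.path.join(dst_root, rel, name)
                  if (not os.path.exists(dst)
                          or os.path.getmtime(src) > os.path.getmtime(dst) + 1):
                      os.makedirs(os.path.dirname(dst), exist_ok=True)
                      shutil.copy2(src, dst)  # copy2 keeps the timestamp

      NAS = "/Volumes/TimeCapsule/Main"    # mounted Time Capsule share (assumed)
      LOCAL = os.path.expanduser("~/Main")
      sync(NAS, LOCAL)   # pull newer files down
      sync(LOCAL, NAS)   # push newer files up

    Run it from a scheduled launchd job that first checks whether the share is mounted, and you have the "only when at home" behaviour.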

    Read the article

  • Hard drive failed, suspected filesystem corruption, still cannot salvage any data from hard drive

    - by Hippy-Head
    Firstly, I am terribly sorry if this is a duplicate, but I couldn't find a similar issue to mine, so here goes. I have a 1TB HDD, bought around 8 months ago, used as a backup hard drive. I had not used the drive for a period of time whatsoever, and when I tried to get back to some files on it, it was completely wiped, just like that. At first it would not even mount; I tried everything from command-line chkdsk and filesystem recovery software to rebuild it. After a few attempts I managed to initialize it; at that point, that alone was an achievement. The problems started when I tried to recover the data inside. I have used A LOT of software, free and commercial, on both Mac and Windows, with the help of cmd or Terminal commands, but no data of any kind was recovered, even after letting it scan thoroughly for around 9-10 hours all night, sometimes longer, with no results at all. I am somewhat desperate; I am usually good at retrieving data from corrupt hard drives, but not in this case. Call me paranoid, but I do not want to give it to someone else to fix, as I have a lot of photos and personal stuff that I do not want anyone to see.
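
    If the drive has physical problems rather than pure filesystem damage, the standard advice is to image it once and point the recovery tools at the image instead of re-scanning the failing hardware. GNU ddrescue is the proper tool for this; purely to illustrate the idea, a naive Python sketch that skips unreadable blocks (/dev/sdb is an assumed device name - double-check yours):

      import os

      BLOCK = 512 * 1024
      src = os.open("/dev/sdb", os.O_RDONLY)
      size = os.lseek(src, 0, os.SEEK_END)

      with open("rescue.img", "wb") as out:
          pos = 0
          while pos < size:
              os.lseek(src, pos, os.SEEK_SET)
              out.seek(pos)
              want = min(BLOCK, size - pos)
              try:
                  out.write(os.read(src, want))
              except OSError:
                  out.write(b"\x00" * want)  # zero-fill the unreadable block
              pos += BLOCK
      os.close(src)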

    Read the article

  • RAID 5 Install on Ubuntu Server 12.04 [closed]

    - by tarabyte
    Environment: Ubuntu Server 12.04, installing from a bootable flash drive. Error: "No root file system is defined. Please correct this from the partitioning menu." I'm trying to set up a personal file server with software RAID 5. I just got three hard drives for this, but haven't found any solid documentation. I'm unsure of the basic way to partition my hard drives. Can someone upload a screenshot of their "partition disks" screen so that I can compare with mine (attached)?
    - Should I set the bootable flag?
    - Do I need a /home partition? A /boot partition?
    - Should I "Use [my partition] as: Ext4 journaling file system", or make that field "physical volume for RAID"?
    I am an engineer, but I have only a cursory knowledge of all things Linux. If you know of any good learning resources I'd be happy to hear about those too (that way I don't have to blindly follow deprecated tutorials online). Well, the image would be here, but I don't have a high enough reputation yet (please vote up :)). Thank you. References I've looked into:
    - https://help.ubuntu.com/community/Installation/SoftwareRAID
    - https://help.ubuntu.com/12.04/serverguide/advanced-installation.html
    - http://forevergeeks.com/setup-ubuntu-server-with-raid-5/
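
    For what it's worth, the usual advice is a small separate /boot partition outside the array (the 12.04 installer boots most reliably that way), with the big partitions each marked "physical volume for RAID". A minimal sketch of the equivalent manual steps, shelling out from Python - the device names are assumptions:

      import subprocess

      def run(cmd):
          print("+", " ".join(cmd))
          subprocess.run(cmd, check=True)

      # Assemble the three RAID-marked partitions into one RAID5 device
      run(["mdadm", "--create", "/dev/md0", "--level=5",
           "--raid-devices=3", "/dev/sda2", "/dev/sdb2", "/dev/sdc2"])

      # The filesystem goes on the array, not on the member partitions
      run(["mkfs.ext4", "/dev/md0"])
      run(["mount", "/dev/md0", "/mnt"])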

    Read the article

  • Help: email account management among multiple users

    - by CogitoErgoSum
    I preface this by saying this may belong in IT Security; not too sure, feel free to move. Currently we have an email account, [email protected], hosted via Google Apps (as is all our email). We had an incident where we had to terminate an employee. This employee, however, had the password for this account, as we have 20-30 people utilizing it at any given point to manage customer emails etc. Thinking on this, I feel there must be a better way to manage access. With Google you can associate up to 10 email accounts with another, but the problem is we have more like 20-30 people going. We were evaluating tools such as Salesforce and Assistly, where people have their own login credentials and the system contains the appropriate SMTP information for the [email protected] email address, so emails are sent from it rather than from a user's personal account. Aside from those options, does anyone have any other thoughts? One suggestion floated was moving everyone to desktop clients and saving the password info there, so they could only log in from their physical workstation, but we may have situations where we'd like employees to work remotely. Does anyone have experience with this sort of system, where ~20-30 people are responding from one mailbox, and with how to manage security and access?
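
    One pattern worth noting: SMTP lets each person authenticate with their own credentials while the message carries the shared identity, so nobody ever needs the shared password. A minimal sketch (Python), assuming Google Apps is configured to allow the shared address as a "Send mail as" alias for the user (all addresses are placeholders):

      import smtplib
      from email.message import EmailMessage

      msg = EmailMessage()
      msg["From"] = "info@company.com"       # shared identity
      msg["To"] = "customer@example.com"
      msg["Subject"] = "Re: your order"
      msg.set_content("Thanks for getting in touch!")

      with smtplib.SMTP("smtp.gmail.com", 587) as s:
          s.starttls()
          s.login("alice@company.com", "alices-own-password")  # personal login
          s.send_message(msg)

    Revoking a departed employee then means disabling one personal account instead of rotating a password that 30 people share.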

    Read the article

  • Best CPUs for speeding up compiling times of C++ w/ DistGCC

    - by Jay
    I'm putting together a distributed build farm with DistGCC to speed up our team's compile times and am just looking for thoughts on which processors to use in the hosts. Are we going to get a noticeable decrease in time using 8 cores vs. 4 hyperthreaded cores? Is there a big difference in time between i7 and Xeon? Etc., etc. I just need advice from people who've put together kick-a build clusters. We've got a majority of the normal things to speed up builds in place (pre-compiled headers, ccache, local gigabit connections between them, tons of RAM, etc.), so please just give advice on the best processor to use. Money is a factor, but anything's doable if the performance increase is noticeable. Thanks. Jay EDIT: Although any advice IS welcome, please refrain from "Do this first" posts, as we're not planning on skimping on things like SSDs or maxed-out RAM. My personal system is an iMac quad-core i5 with 8GB of RAM. When I build our project locally, my processor floats around 99-100% the majority of the time, which makes me assume it is the bottleneck, even if you made everything else faster. My RAM, on the other hand, doesn't even get close to maxing out. It's also worth noting that I did research this, but every discussion I could find was primarily about gaming machines, which are obviously a different beast in usage. These machines won't even have monitors or anything but integrated graphics, since they have one purpose: build freakin' fast (hopefully).
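
    Before buying anything, it may be worth measuring how your actual codebase scales with job count on hardware you already have; hyperthreading often buys noticeably less than real cores for compiles. A minimal timing sketch (Python), assuming a make-based build:

      import subprocess
      import time

      for jobs in (1, 2, 4, 8, 16):
          subprocess.run(["make", "clean"], stdout=subprocess.DEVNULL)
          start = time.time()
          subprocess.run(["make", f"-j{jobs}"], check=True,
                         stdout=subprocess.DEVNULL)
          print(f"-j{jobs}: {time.time() - start:.1f}s")

    Where the curve flattens tells you whether 8 real cores would actually beat 4 hyperthreaded ones for this particular project.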

    Read the article


  • How can I disable the CTRL-ALT-DEL key combination completely on XP/Vista/7?

    - by Travesty3
    I have been googling extensively to figure this out, and nobody seems to be able to give a direct answer. Let me start by saying that I'm NOT talking about requiring CTRL-ALT-DEL to enter logon information. I'm working on a golf simulator program which is used at golf centers. I need the ability to completely disable the CTRL-ALT-DEL key sequence so that the golf center customers can't get out of the program and access the computer at all. I realize there are other key combinations that need to be handled as well; we already have this entire feature working in XP, but we're going to be switching to Windows 7 soon, and CTRL-ALT-DEL is the only one that doesn't seem to work in Win7. I'd really like an all-around solution if at all possible. This same program may also be installed on a client's personal computer for an in-home golf simulator, but the computers that really need this feature (golf center computers) are provided to the golf center by us. So would the best option be to write a new shell? I don't know anything about that at all, other than that others suggest writing a new shell for kiosk mode. I'd really like a simpler option, like modifying the registry in some way. I have heard that you can remove some buttons from the menu screen that pops up, but unless I can remove pretty much all of them (including the shutdown/restart button in the bottom-right corner), this won't be enough of a solution for me. Thanks for taking the time to read this, and thanks again for any help you could provide! -Travis
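
    For reference, Windows reserves the CTRL-ALT-DEL secure attention sequence for Winlogon, so registry policies can only strip options off the screen it shows; swallowing the keystroke entirely is what custom shells / kiosk setups are for. A minimal sketch (Python) of the per-user policy values that remove most of the menu entries:

      import winreg

      POLICY = r"Software\Microsoft\Windows\CurrentVersion\Policies\System"
      values = {
          "DisableTaskMgr": 1,          # removes "Start Task Manager"
          "DisableLockWorkstation": 1,  # removes "Lock"
          "DisableChangePassword": 1,   # removes "Change a password"
      }
      with winreg.CreateKey(winreg.HKEY_CURRENT_USER, POLICY) as key:
          for name, data in values.items():
              winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, data)

    The Switch User entry and the power button in the corner are governed by machine-wide policies, so this alone won't fully lock the screen down - which is why kiosk shells keep coming up as the answer.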

    Read the article

  • How do I determine whether this email bounce is my fault?

    - by David Zaslavsky
    I use Google Apps to handle email for my personal website, so I have an email address [email protected] through that, and I also have a Gmail account [email protected]. Now, I've been trying to send emails to a particular recipient who shall be known as [email protected]. When I send the email from my Gmail account with the @gmail.com address, it works fine. However, when I send it from my Google Apps account with the @ellipsix.net address, I get a bounce message which includes the following text: Delivery to the following recipient failed permanently: [email protected] Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 554 554 mail server permanently rejected message (#5.3.0) (state 17). The bounce message suggests that it is up to the mail administrator of the recipient domain example.com to fix the problem, whatever it is. But I would like to be as sure as possible that nothing needs to be fixed on my end. I already have DKIM signatures enabled for my domain, and I have published an SPF DNS record. Is there something else I should check or do, or can I be confident that it's up to the recipient to fix this issue? Does the "state 17" in the bounce message mean something relevant? I've included my domain name in the question so people who know more than me about this stuff can independently check the relevant DNS records or other information. This other question seems similar, but I've already investigated everything suggested in the answers there (except for contacting Google, which I don't want to do unless I suspect it's their issue to fix).
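
    One quick sanity check from your side is confirming the records the receiving server would evaluate. A minimal sketch using dig from Python (the "google" DKIM selector is the usual Google Apps default - an assumption here):

      import subprocess

      def txt(name):
          out = subprocess.run(["dig", "+short", "TXT", name],
                               capture_output=True, text=True)
          return out.stdout.strip() or "(no record)"

      print("SPF: ", txt("ellipsix.net"))
      print("DKIM:", txt("google._domainkey.ellipsix.net"))

    If both come back sane, that strengthens the case that the 554 rejection is a policy decision on the recipient's side rather than a fault in your DNS.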

    Read the article

  • Why can a local root turn into any LDAP user?

    - by Daniel Gollás
    I know this has been asked here before, but I am not satisfied with the answers and don't know if it's OK to revive and hijack an older question. We have workstations that authenticate users against an LDAP server. However, the local root user can su into any LDAP user without needing a password. From my perspective this sounds like a huge security problem that I would hope could be avoided at the server level. I can imagine the following scenario where a user can impersonate another, and I don't know how to prevent it:
    - UserA has limited permissions, but can log into a company workstation using their LDAP password. They can cat /etc/ldap.conf to figure out the LDAP server's address and run ifconfig to check their own IP address. (This is just an example of how to get the LDAP address; I don't think it's usually a secret, and obscurity is not hard to overcome.)
    - UserA takes out their own personal laptop, configures authentication and network interfaces to match the company workstation, plugs the network cable from the workstation into their laptop, boots, and logs in as local root (it's their laptop, so they have local root).
    - As root, they su into any other user on LDAP, who may or may not have more permissions, without needing a password! At the very least, they can impersonate that user without any problem.
    The other answers on here say that this is normal UNIX behavior, but it sounds really insecure. Can the impersonated user act as that user on an NFS mount, for example? (The laptop even has the same IP address.) I know they won't be able to act as root on a remote machine, but they can still be any other user they want! There must be a way to prevent this at the LDAP server level, right? Or maybe at the NFS server level? Is there some part of the process that I'm missing that actually prevents this? Thanks!!
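
    The mechanics are easy to demonstrate: when root switches user, su is ultimately just a setuid() call, and root already holds that privilege, so the LDAP server is never asked for a password. A two-line illustration in Python (run as root; uid 10000 is arbitrary):

      import os

      print("before:", os.getuid())
      os.setuid(10000)   # no password prompt, no LDAP round-trip
      print("after: ", os.getuid())

    That is also why the fix has to live server-side: services must stop trusting client-asserted uids, e.g. NFS exported with Kerberos security (sec=krb5) instead of plain AUTH_SYS.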

    Read the article

  • Improving abysmal 802.11n wireless network

    - by concept
    I am in desperate need of help to improve the abysmal performance of my 802.11n wireless network. At best I get 30Mb/s (this is an internet download) from a technology that boasts 300Mb/s; even worse is the LAN, where to date the best I have ever gotten is 1Mb/s. It is literally quicker to copy the file to a USB stick and walk it to the other computer. The infrastructure is this:
    - AP is 802.11n only, broadcasting at both 2.4GHz and 5GHz
    - Mac with an 802.11a/b/g/n card is connected to the AP via 5GHz
    - Linux with an 802.11a/b/g/n card is connected to the AP via 2.4GHz
    I have conducted the following tests (results at the end of this post):
    - Internet-based speed test, wired and wireless
    - LAN file copy, wired and wireless
    I have read:
    - http://nutsaboutnets.com/troubleshooting-wi-fi-problems/
    - http://www.smallnetbuilder.com/wireless/wireless-basics/30664-5-ways-to-fix-slow-80211n-speed
    - http://www.wi-fiplanet.com/tutorials/7-tips-to-increase-wi-fi-performance.html
    - Slow file transfer on network between two 802.11n laptops (connected directly together via access point)
    - Wireless Network Performance Issues
    - Slower than expected 802.11n wireless network speeds
    I have made the following optimizations:
    - AP broadcasts only 802.11n on both 2.4GHz and 5GHz frequencies
    - 2.4GHz is on the channel with the least interference (I live in an apartment with lots of APs); this made a 10Mb/s improvement
    - Our AP is the only one transmitting on the 5GHz frequency
    - Security: WPA Personal, WPA2 AES encryption
    - Bandwidth: 20MHz / 40MHz (I assume this to be channel bonding)
    I have tried the following, with zero improvement:
    - Dropped the fragment threshold to 512
    - Dropped the Request To Send (RTS) threshold to 512 and then 1
    - Even thought of buying a frequency spectrum analyzer, until I saw the cost of them!
    Speed test results:
    - Linux wired: DOWNLOAD 128.40Mb/s, UPLOAD 10.62Mb/s (www.speedtest.net/my-result/2948381853)
    - Mac wired: DOWNLOAD 118.02Mb/s, UPLOAD 10.56Mb/s (www.speedtest.net/my-result/2948384406)
    - Linux wireless: DOWNLOAD 23.99Mb/s, UPLOAD 10.31Mb/s (www.speedtest.net/my-result/2948394990)
    - Mac wireless: DOWNLOAD 22.55Mb/s, UPLOAD 10.36Mb/s (www.speedtest.net/my-result/2948396489)
    LAN NFS copy of a 53,345,087-byte (51MB) file:
    - Linux to Mac NFS wired: 65.6959 Mb/sec
    - Linux to Mac NFS wireless: 0.9443 Mb/sec
    All help is appreciated; even testing methods will be accepted.
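
    One test worth adding: a raw TCP transfer between the two machines, which takes NFS (and its sync-heavy write pattern) out of the picture and measures just the radio link. A minimal sketch in Python 3.8+ (the port number is arbitrary; run with "recv" on one machine and the receiver's hostname on the other):

      import socket, sys, time

      PORT, CHUNK, TOTAL = 5201, 64 * 1024, 64 * 1024 * 1024  # send 64 MB

      if sys.argv[1] == "recv":                 # receiver side
          srv = socket.create_server(("", PORT))
          conn, _ = srv.accept()
          n, t0 = 0, time.time()
          while chunk := conn.recv(CHUNK):
              n += len(chunk)
          print(f"{n * 8 / (time.time() - t0) / 1e6:.1f} Mbit/s")
      else:                                     # sender side: argv[1] = host
          c = socket.create_connection((sys.argv[1], PORT))
          sent = 0
          while sent < TOTAL:
              c.sendall(b"\x00" * CHUNK)
              sent += CHUNK
          c.close()

    If raw TCP manages tens of Mbit/s while NFS crawls at 1 Mbit/s, the culprit is NFS-over-wireless rather than the 802.11n link itself.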

    Read the article

  • How to Refresh or Reset Windows 8 without the System Reserved partition?

    - by Karan
    The article "Refresh and reset your PC" mentions exactly what happens during the refresh and reset operations in Windows 8.
    Refresh:
    1. The PC boots into Windows RE.
    2. Windows RE scans the hard drive for your data, settings, and apps, and puts them aside (on the same drive).
    3. Windows RE installs a fresh copy of Windows.
    4. Windows RE restores the data, settings, and apps it has set aside into the newly installed copy of Windows.
    5. The PC restarts into the newly installed copy of Windows.
    Reset:
    1. The PC boots into the Windows Recovery Environment (Windows RE).
    2. Windows RE erases and formats the hard drive partitions on which Windows and personal data reside.
    3. Windows RE installs a fresh copy of Windows.
    4. The PC restarts into the newly installed copy of Windows.
    It is my understanding that Windows RE (Recovery Environment) is included as part of the System Reserved partition created by default on the first hard disk. The size of this partition has gone up to 350 MB from the 100 MB it used to be in Vista/Windows 7, no doubt as a result of adding these features. Now, we have already discussed how to skip the creation of this System Reserved partition during Setup; basically, the same techniques that used to work with Windows 7 work with Windows 8 as well. What I want to know is: what will be the exact repercussions of not having the System Reserved partition in place? I assume Troubleshoot / Advanced options should still be available as before. But what about the Troubleshoot menu itself? Will the Refresh and Reset options disappear? Will they remain but be unavailable? Or possibly they will throw an error if selected? Also, will it be possible to access and successfully execute these options if installation media is available? Anything else that might be affected?
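
    Before skipping the partition, it may help to ask Windows where its recovery environment actually lives; the built-in reagentc tool reports that. A minimal wrapper sketch (Python, from an elevated prompt):

      import subprocess

      info = subprocess.run(["reagentc", "/info"],
                            capture_output=True, text=True)
      print(info.stdout)  # shows Windows RE status and its on-disk location

    If the report shows Windows RE sitting inside the Windows partition (C:\Recovery) rather than System Reserved, Refresh and Reset should survive the skipped partition - but that's exactly the sort of thing this check lets you confirm per machine.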

    Read the article

  • VPS with Plesk, one IP, and GoDaddy (definitely need help)

    - by Francesco
    Hi there, here's my situation: I have Plesk 8.3.0 with one IP, and I've registered my domains at godaddy.com. My problem: I cannot figure out how to configure Plesk and GoDaddy to have my domains (6) properly working on the VPS. I have only one IP, so I can't have my personal NS and need to use the GoDaddy NS. But... how do I set all the stuff up? I've made a try but it's not working. Please take a look. This is how the domain I'm actually working on is configured on Plesk:
    - 1.2.3.4 / 24            PTR      mydomain.com.
    - ftp.mydomain.com.       CNAME    mydomain.com.
    - mail.mydomain.com.      A        1.2.3.4
    - ns.mydomain.com.        A        1.2.3.4
    - mydomain.com.           NS       ns.mydomain.com.
    - mydomain.com.           A        1.2.3.4
    - mydomain.com.           MX (10)  mail.mydomain.com.
    - webmail.mydomain.com.   A        1.2.3.4
    - www.mydomain.com.       CNAME    mydomain.com.
    On GoDaddy (Total DNS Control), for the same domain I have this setup, all records with a TTL of 1 hour:
    A (Host):
    - *  ->  1.2.3.4
    CNAMEs (Aliases):
    - e -> email.secureserver.net
    - email -> email.secureserver.net
    - ftp -> @
    - imap -> imap.secureserver.net
    - mail -> pop.secureserver.net
    - mobilemail -> mobilemail-v01.prod.mesa1.secureserver.net
    - pda -> mobilemail-v01.prod.mesa1.secureserver.net
    - pop -> pop.secureserver.net
    - smtp -> smtp.secureserver.net
    - webmail -> webmail.secureserver.net
    - www -> @
    MX (Mail Exchange):
    - 10  @  mailstore1.secureserver.net
    - 0   @  smtp.secureserver.net
    Nameservers:
    - @ -> ns53.domaincontrol.com
    - @ -> ns54.domaincontrol.com
    What should I correct? Thanks for helping me. Francesco
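
    Whatever record layout you settle on, a quick way to verify it from the VPS after propagation - a minimal Python check that every name you serve resolves to your single IP (placeholders throughout):

      import socket

      VPS_IP = "1.2.3.4"
      for host in ("mydomain.com", "www.mydomain.com",
                   "mail.mydomain.com", "webmail.mydomain.com"):
          try:
              ip = socket.gethostbyname(host)
              mark = "OK" if ip == VPS_IP else "WRONG"
              print(f"{host:24} -> {ip} {mark}")
          except socket.gaierror as exc:
              print(f"{host:24} -> lookup failed ({exc})")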

    Read the article

  • Intermittent CNAME forwarding

    - by Godric Seer
    I host a personal website on an old desktop that is LAMP based. Since I have a dynamic IP, I use no-ip to make sure I have a working domain name at all times. I also have a domain I have bought on GoDaddy where I have a CNAME record forwarding the www subdomain to my no-ip domain. At all times, I can connect to my website through the no-ip domain without issue. For the past several weeks, I never had an issue using the GoDaddy domain to connect (ssh or https). As of today, however, the GoDaddy domain only works for about 10 minutes at a time. I get server not found errors most of the time. Also, if I happen to be using the GoDaddy domain for an ssh connection, the connection will freeze. I have attempted to run tests using a couple of online DNS check websites, but have not gotten any errors at any time. I also contacted GoDaddy support but they had no issues connecting to the website, and therefore did not see any issues. I would like advice on how I could debug/resolve this issue. Since the problem appeared without me changing anything on my end, I hope it will resolve itself, but knowing the cause in case it happens again would be preferable. EDIT: I changed the configuration in GoDaddy to create an A (Host) that points at my current IP. This works fine, so I can access the site through the GoDaddy domain without the preceding www. I am currently waiting for a new CNAME record to propagate that points the www subdomain at the main host, rather than my no-ip domain.
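
    Since the failures are intermittent, logging how the name resolves over time is more useful than one-off tests. A minimal polling sketch (Python; the domain is a placeholder for your GoDaddy name):

      import subprocess
      import time

      while True:
          out = subprocess.run(["dig", "+short", "www.example.com"],
                               capture_output=True, text=True)
          answer = out.stdout.strip().replace("\n", " ") or "NO ANSWER"
          print(time.strftime("%H:%M:%S"), answer, flush=True)
          time.sleep(60)

    Run it from a couple of vantage points (home, work, a VPS); if only some resolvers go blank while others keep answering, the trouble is propagation or a flaky nameserver rather than your server.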

    Read the article

  • Can you share offline files cache with two user accounts?

    - by Joel Coehoorn
    I have a new laptop that I use for both home and work. It runs Windows 7 Ultimate and is joined to the domain at work. It is OK to use this laptop for both work and personal activities, and I even have an account set up on the local machine, in addition to the work domain account, specifically to help keep the two separate. At home, I have a file server that I use to share files and printers with my wife's laptop, this new laptop, and my old desktop, which will now become the family machine. My MP3 library is on there, among other things. What I want to do is use the Windows Offline Files feature to keep a synced copy of my music library on the laptop. That part is easy. What's tricky is that I want to share this offline cache between both the local account on the laptop and my work domain account. I could sync them both separately, but then I have two copies of a very large music library stored locally. This also means twice the sync burden, when the domain account is rarely connected to the file share. I really want to sync from the local machine account only, and have the domain account be able to use the synced files. I know where the offline file cache is kept (\Windows\CSC) and I can find the cached files (not encrypted), but permissions on the cache are set up oddly, so using that cache directly is not trivial. Any ideas appreciated.

    Read the article

  • Top Reasons You Need A User Engagement Platform

    - by Michael Snow
    Guest post by: Amit Sircar, Senior Sales Consultant, Oracle

    Deliver complex enterprise functionality through a simple, intuitive and unified User Interface (UI)
    The modern enterprise contains a wide range of applications that are used to manage the business and drive competitive advantages. Organizations respond by creating a complex structure that results in a functional and management grouping of users. Each of these groups of users requires access to multiple applications and information sources in order to perform their job functions. This leads to the lack of a unified view of enterprise information, inconsistent user interfaces and disjointed security. To be effective, portals must be designed from the end-user perspective, enabling the user to accomplish as many tasks as possible while visiting the fewest number of portals. This requires rethinking the way that portals are built, moving from a functional business unit perspective to a user-focused, process-oriented point of view. Oracle WebCenter provides the Common User Experience Architecture that allows organizations to seamlessly present a unified view of enterprise information tailored to a particular user's role and preferences. This architecture provides the best practices, design patterns and delivery mechanism for myriad services, applications, and data sources. In order to serve as a primary system of access, Oracle WebCenter also provides access to unstructured content and to other users via integrated search, service-oriented artifacts, content management, and collaboration tools.

    Provide a modern and engaging experience without modifying the core business application
    Web 2.0 technologies such as blogs, wikis, forums or social media sites are having a profound impact on the public internet. These technologies can be leveraged by enterprises to add significant value to the business. Organizations need to integrate these technologies directly into their business applications while continuing to meet their security and governance needs. To deliver richer connections and become a more agile and intelligent business, WebCenter provides an enterprise portal platform that contains pre-integrated, standards-based Enterprise 2.0 services. These Enterprise 2.0 services can be easily accessed, integrated and utilized by users. By giving users the ability to use and integrate Enterprise 2.0 services such as tags, links, wikis, activities, blogs or social networking directly with their portals and applications, they are empowered to make richer connections, optimize their productivity, and ultimately increase the value of their applications.

    Foster a collaborative experience
    The organizational workplace has undergone a major change in the last decade. With increasing globalization and a distributed workforce, project teams may be physically separated by large distances. Online collaboration technologies are becoming a critical resource to enable virtual teams to share information and work together effectively. Oracle WebCenter delivers dynamic business communities with rich services to empower teams to quickly and efficiently manage their information, applications, projects, and people without requiring IT assistance. It brings together the latest technology around Enterprise 2.0 and social computing, communities, personal productivity, and ad-hoc team interactions without any development effort. It enables sharing and collaboration on team content, focusing an organization's valuable resources on solving business problems, tapping into new ideas, and reducing time-to-market.

    Mobile Support
    The traditional workplace dynamics that required employees to access their work applications from their desktops have undergone a fundamental shift. Employees were used to primarily working from company offices and utilized an IT-issued computer for performing their job functions. With the introduction of flexible work hours and the growth of remote workers, more and more employees need the ability to remain productive even when they do not have access to a computer, via the use of tablets and smartphones. In addition, customers and citizens have come to expect 24x7 access to resources and websites from wherever they are located. Tablets and smartphones have empowered everyone to quickly access services they need anytime and from any place. WebCenter provides out-of-the-box capabilities to deliver the mobile experience in a seamless manner. Seeded device profiles and toolkits within WebCenter can be used to render the same web pages on multiple target devices such as iPads, iPhones and Android devices. Web designers can preview the portal using the built-in simulator, make necessary updates and then deploy their UI design for the targeted device.

    Conclusion
    The competitive economy and resource constraints facing organizations today require them to find ways to make their applications, portals and Web sites more agile and intelligent, and their knowledge workers more productive no matter where they are located. Organizations need to provide faster access to relevant information and resources, enhance existing applications and business processes with rich Enterprise 2.0 services, and seamlessly deliver content to mobile platforms. Oracle WebCenter successfully meets these challenges by providing the modern user experience platform for the enterprise and the Web.

    Read the article

  • iOrgSoft Video Converter for Mac

    - by terryhao
    iOrgSoft Video Converter for Mac (http://www.iorgsoft.com/Video-Converter-for-Mac/) is an excellent video converting and editing program for Macintosh users. A built-in powerful video player plus trimming, splitting, joining and merging tools give you everything you need to manage your videos on a Mac. This converter supports many video formats, like AVI, MP4, WMV, MPEG-1/2, YouTube (FLV), LimeWire, RealPlayer (RM, RMVB), QuickTime (MOV), MKV, MOD, TOD, ASF, 3GP, 3G2, AVCHD/M2TS/MTS/TS/TRP, MXF, etc. Video Converter for Mac features a very clean user interface which makes this task a breeze. You can trim/clip any segments and optionally merge/join and sort them to create your personal movie, crop the frame size to remove any unwanted area in the frame just like a pair of smart scissors, and set the output video parameters such as video resolution, video frame rate, audio codec, video codec and video quality. Converted videos can be imported into iMovie/iTunes/FCE/FCP/QuickTime Pro or played on iPad, iPod touch, iPod classic, iPod nano, iPhone, iPhone 3GS, Apple TV, PSP, PS3, Creative Zen, iRiver PMP, Archos, mobile phones and other MP4/MP3 players. Video Converter for Mac makes video conversion easy. Free download now and try it for yourself!
    Related: Video Editor for Mac (http://www.iorgsoft.com/Video-Editor-for-Mac/), MOD Converter (http://www.iorgsoft.com/Mod-Converter/), MOD Converter for Mac (http://www.iorgsoft.com/Mod-Converter-for-Mac/)

    Read the article

  • Windows XP mounting USB drive to same letter as previously mapped network drive

    - by GAThrawn
    Why does Windows always mount a USB drive as the next drive letter after the last physical drive, even when that letter is already taken by a mapped drive, and is there any way to improve this behaviour? I tend to use a few different flash drives on my PC, and I have both a BlackBerry and a personal phone that mount as USB drives when I plug them in to charge. Being on a corporate PC, I also have a number of mapped network drives (some set by login script, some set as persistent mappings in my profile). When I first log in I have drive letters like this:
    - C: - local drive
    - D: - DVD drive
    - G: - login-script mapped drive
    - J: - login-script mapped drive
    When I plug the BlackBerry in, it mounts two drives (one for onboard storage, one for the SD card) as E: and F:. If I then plug in another USB drive, it mounts as G:, even though that letter is already taken by a network mapped drive. This leaves me with the following drives:
    - C: - local drive
    - D: - DVD drive
    - E: - USB drive (BlackBerry)
    - F: - USB drive (BlackBerry)
    - G: - login-script mapped drive
    - [G: - USB drive - mounted but not visible in Explorer or a command prompt]
    - J: - login-script mapped drive
    I then have to go into Disk Management, find the new USB drive that's mounted to G: and re-assign it to another letter, e.g. Z:. Once this is done, AutoPlay detects it and throws up its normal dialog, and it's browsable in Explorer. While this is OK to do if you only use one or two USB drives and have admin access to your PC with your login account, it's a total pain in the proverbial if you regularly use a whole load of different USB devices, and corporate policy means you have one account for your normal login (that only has User access to workstations) but have to use a different account for any privileged action. I realize that one possible reason for this is the difference between hardware, which is mounted and assigned drive letters at the system level, and mapped drives, which are done at the user level. For USB devices that are already plugged in before login, obviously they're mounted before Windows knows what network drives may be mapped. However, if you plug the USB devices in after you're fully logged in and have drives mapped, then Windows must know which letters are available?
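
    Once you know the volume number, the Disk Management round-trip can at least be scripted, so a runas shortcut or scheduled task running under the privileged account can trigger it. A sketch driving diskpart from Python - the volume number and target letter are assumptions, so run "list volume" in diskpart first to find yours:

      import os
      import subprocess
      import tempfile

      # diskpart takes a script file via its /s switch
      cmds = "select volume 5\nassign letter=Z\n"
      with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
          f.write(cmds)
          path = f.name
      subprocess.run(["diskpart", "/s", path], check=True)
      os.remove(path)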

    Read the article

  • Cannot send email outside of network using Postfix

    - by infmz
    I've set up an Ubuntu server with Request Tracker following this guide (the section about inbound mail would be relevant). However, while I'm able to send mail to other users within the network/domain, I cannot seem to reach beyond - such as my personal accounts etc. I have no idea what is causing this; I thought that all it takes is for the system to relay mail through our Exchange server and deliver it the same way. That hasn't been the case. I have found another server set up in a similar fashion (CentOS 5, Request Tracker, but using Sendmail); however, it is a dated server and whoever built it has kindly left no documentation on how it works, making it a pain to use as a reference system! :) At one point I was told I needed to set up a relay between the local server's email address and our AD server, but this didn't seem to work. Sorry - I know next to nothing about mail servers, and my colleagues know nothing about Linux, so it's a hard one for me. Thank you! EDIT: Result of postconf -n, with details masked =)
      alias_database = hash:/etc/aliases
      alias_maps = hash:/etc/aliases
      append_dot_mydomain = no
      biff = no
      config_directory = /etc/postfix
      inet_interfaces = all
      mailbox_command = procmail -a "$EXTENSION"
      mailbox_size_limit = 0
      mydestination = myhost.mydomain.com, localhost.mydomain.com, , localhost
      myhostname = myhost.mydomain.com
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      myorigin = /etc/mailname
      readme_directory = no
      recipient_delimiter = +
      relayhost = EXCHANGE IP
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
      smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
      smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtpd_use_tls = yes
    Sample log message:
      Sep 4 12:32:05 theedgesupport postfix/smtp[9152]: 2147B200B99: to=<[email protected]>, relay=RELAY IP:25, delay=0.1, delays=0.05/0/0/0.04, dsn=5.7.1, status=bounced (host HOST IP said: 550 5.7.1 Unable to relay for [email protected] (in reply to RCPT TO command))
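
    The 550 in that log is the Exchange relayhost refusing to forward mail for outside domains coming from this host. A minimal sketch (Python) that reproduces the conversation directly, so you can show the Exchange admin exactly what needs allowing - the relay IP and addresses are placeholders:

      import smtplib

      RELAY = "10.0.0.10"   # the EXCHANGE IP from relayhost
      with smtplib.SMTP(RELAY, 25, timeout=10) as s:
          s.ehlo()
          print(s.mail("rt@mydomain.com"))
          print(s.rcpt("someone@gmail.com"))  # expect (550, ...) until the
                                              # Exchange receive connector
                                              # permits relaying from this host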

    Read the article

  • Defeating the RAID5 write hole with ZFS (but not RAID-Z) [closed]

    - by Michael Shick
    I'm setting up a long-term storage system for keeping personal backups and archives. I plan to have RAID5 starting with a relatively small array and adding devices over time to expand storage. I may also want to convert to RAID6 down the road when the array gets large. Linux md is a perfect fit for this use case since it allows both of the changes I want on a live array and performance isn't at all important. Low cost is also great. Now, I also want to defend against file corruption, so it looked like a RAID-Z1 would be a good fit, but evidently I would only be able to add additional RAID5 (RAID-Z1) sets at a time rather than individual drives. I want to be able to add drives one at a time, and I don't want to have to give up another device for parity with every expansion. So at this point, it looks like I'll be using a plain ZFS filesystem on top of an md RAID5 array. That brings me to my primary question: Will ZFS be able to correct or at least detect corruption resulting from the RAID5 write hole? Additionally, any other caveats or advice for such a set up is welcome. I'll probably be using Debian, but I'll definitely be using Linux since I'm familiar with it, so that means only as new a version of ZFS as is available for Linux (via ZFS-FUSE or so).
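
    On the main question: ZFS checksums every block, so it will detect corruption that md hands up, but with a single-device pool it can only repair data blocks if you give it a second copy (pool metadata is already stored redundantly). A minimal sketch of that setup - the md device name is an assumption:

      import subprocess

      def run(cmd):
          print("+", " ".join(cmd))
          subprocess.run(cmd, check=True)

      run(["zpool", "create", "tank", "/dev/md0"])
      run(["zfs", "set", "copies=2", "tank"])  # duplicate data blocks so
                                               # ZFS can self-heal them
      run(["zpool", "scrub", "tank"])          # periodic verify/repair pass

    copies=2 halves your usable space, so many people run with copies=1 and accept detect-but-not-repair, leaning on backups for the rare bad block.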

    Read the article

  • Virtual LAN on the Cloud - help confirm my understanding?

    - by marfarma
    [Note: I tried to post this over at Server Fault, but I don't have enough 'points' for more than one link. Powers that be, move this question over there.] Please give this a quick read and let me know if I'm missing something before I start trying to make this work. I'm not a systems admin professional, and I'd hate to end up banging my head into the wall if I can avoid it.
    Goals:
    - Create a 'road-warrior'-capable, star-shaped virtual LAN for consultants who spend the majority of their time on client sites, and whose firm has no physical network or servers.
    - Enable CIFS access to a cloud-server-based installation of Alfresco.
    - Allow eventual implementation of some form of single-sign-on (OpenLDAP server) access to Alfresco and other server applications implemented in the future.
    Given:
    - All servers will live in the public internet cloud (Rackspace Cloud Servers).
    - The OpenVPN server will be a Linux distro, probably Ubuntu 9.x, installed on the same server as Alfresco (at least to start).
    - Staff will access server applications and resources from client sites, hotels, trains, planes, coffee shops or their homes, over various ISPs, using their company laptops or personal home desktops.
    Based on my research thus far, to accomplish this I'll need the following (a configuration sketch follows this list):
    - OpenVPN with bridging enabled to create a star-shaped "virtual" LAN: http://openvpn.net/index.php/open-source/documentation/miscellaneous/76-ethernet-bridging.html
    - A road-warrior network configuration, as described in this Shorewall article (lower down the page): http://www.shorewall.net/OPENVPN.html
    - Bridge addressing configured (probably DHCP): http://openvpn.net/index.php/open-source/faq.html#bridge-addressing
    - CIFS/Samba configured to accept VPN IP addresses: http://serverfault.com/questions/137933/howto-access-samba-share-over-vpn-tunnel
    - Client software set up, with keys configured for access (potentially through an OpenVPN-AS client portal): http://www.openvpn.net/index.php/access-server/download-openvpn-as/221-installation-overview.html
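
    For the bridged-server piece, a minimal sketch of the directives the ethernet-bridging doc boils down to, written out from Python for convenience - all addresses are placeholders for the Rackspace host's bridge:

      # Write a skeleton server.conf for a bridged (tap) OpenVPN server.
      server_conf = """\
      port 1194
      proto udp
      dev tap0
      ca ca.crt
      cert server.crt
      key server.key
      dh dh1024.pem
      # bridge IP, netmask, then the address pool handed to road warriors
      server-bridge 10.8.0.4 255.255.255.0 10.8.0.128 10.8.0.254
      client-to-client
      keepalive 10 120
      persist-key
      persist-tun
      """

      with open("/etc/openvpn/server.conf", "w") as f:
          f.write(server_conf)

    The server-bridge line is the star-topology part: every client lands on the same layer-2 segment, which is what lets CIFS browsing work across the VPN.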

    Read the article

  • Relax Linux - it's just me! (filesystem permissions)

    - by Xeoncross
    One of my favorite things about Linux is also the most annoying - file system permissions. On production machines and web servers I love how everything is so secure and locked down, but on development machines it really slows me down. I'll give one example out of the many I discover weekly. Like most people, I dual-boot Ubuntu and Windows so I can continue using the Adobe CS4 suite. I often design web themes and other things while I'm still using Windows. Later I'll boot into Ubuntu to take the themes and write the backend PHP for them. After mounting the Windows C: drive partition I can copy the template files over so I can begin editing them. However, thanks to Linux's desire to protect me, I find that after copying the files I end up with a totally locked set of files where even I don't have read-write permissions. So after careful consideration of the tremendous risks that the HTML files pose to me, I chmod them so that I and Apache can begin using them. Now granted, the chmod process isn't that hard, but after you chmod enough files per day you get sick of doing it. I'm constantly creating, fetching, editing, and removing files from my user, git repos, PHP, or other random processes. This is a personal development machine, after all; everything changes on a day-by-day basis. So my question is: how can I get Linux to relax about what I'm doing with my HTML/JS/PHP/TXT/SQL/etc. files so that I can work faster without constantly stopping to chmod things? I pinky-promise I won't hack into my account with an HTML file. ;)
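
    For the specific copy-from-Windows case there is a clean fix: NTFS has no Unix permission bits, so the ntfs-3g mount options decide what every file looks like, and you can make them all yours up front instead of chmod-ing afterwards. A minimal sketch - device, mountpoint and uid/gid are assumptions, so check yours with the id and mount commands:

      import subprocess

      # fmask=133 -> files 644, dmask=022 -> dirs 755, all owned by uid 1000
      subprocess.run([
          "mount", "-t", "ntfs-3g",
          "-o", "uid=1000,gid=1000,fmask=133,dmask=022",
          "/dev/sda2", "/mnt/windows",
      ], check=True)

    Files copied from a mount like this arrive with ordinary 644 permissions, so both you and Apache can read them immediately; putting the same options in /etc/fstab makes it permanent.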

    Read the article
