Search Results

Search found 1722 results on 69 pages for 'andrew sullivan'.


  • SOA &amp; E2.0 Partner Community Forum XIII registration is open

    - by Jürgen Kress
    INVITATION TO THE ORACLE SOA AND E2.0 PARTNER COMMUNITY FORUM

    Do you want to learn how to sell the value of Fusion Middleware by combining SOA and E2.0 solutions? We would like to invite you to get updated and trained at our SOA and E2.0 Partner Community Forum on March 15th and 16th, 2011 in Utrecht, The Netherlands. Keynotes: Andrew Sutherland and Andrew Gilboy.

    The Oracle SOA and E2.0 Partner Community Forum is a wonderful opportunity to:
    - learn how to sell the value of Fusion Middleware by combining SOA and E2.0 solutions
    - meet with Oracle SOA and E2.0 Product Management
    - exchange knowledge
    - learn from successful SOA, BPM, WebCenter and UCM implementations
    - understand Oracle's Fusion Applications strategy
    - network within the Oracle SOA Partner Community and the Oracle E2.0 Partner Community

    During this highly informative event you can learn about partner success stories, participate in an array of breakout sessions, exchange information with other partners and enjoy a vibrant panel discussion. In addition to the SOA and E2.0 Partner Community Forum, you can participate in technical hands-on workshops on March 17th and 18th; the goal of these workshops is to prepare you for customer implementations. Places are limited, so don't delay and register now by clicking here. Registration takes a few minutes and is free of charge, except in case of cancellation or no-show (cancellation fee € 150). For more information, please visit our website.

    Best regards, Jürgen Kress & Hans Blaas, SOA & E2.0 Partner Adoption EMEA

    Agenda, March 15th, 2011:
    - Welcome & Introduction
    - Keynote: Oracle Middleware Strategy and information on Application Grid and Exalogic (Andrew Sutherland, SVP Middleware Sales EMEA, Oracle)
    - Keynote: Managing Online Customer, Partner and Employee Engagement with Oracle E2.0 Solutions (Andrew Gilboy, VP E2.0 Sales EMEA, Oracle)
    - Partner SOA/BPM Reference Case
    - Partner WebCenter/UCM Reference Case
    - SOA Suite PS3 (David Shaffer, VP Product Management, Oracle)
    - Why Specialization is important for Partners (Nick Kritikos, Hans Blaas & Jürgen Kress, Alliances & Channels, Oracle)

    Agenda, March 16th, 2011:
    - Welcome & Introduction Day II
    - Breakout round 1: SOA Suite 11g PS3 & OSB; Importance of ADF & JDeveloper; SOA Security IDM; WebCenter PS3, what's new; E2.0 Sales Plays
    - Breakout round 2: WebCenter PS3, what's new; Application Management with Enterprise Manager and AmberPoint; ADF/WebCenter 11g integration with BPM Suite 11g; Importance of ADF & JDeveloper; JCAPS & OC4J migration opportunities for service business
    - Breakout round 3: BPM 11g, what's new; Universal Content Management 11g; SOA Security Management; E2.0 surrounding products: ATG, Documaker, Primavera; Middleware Industry Value Propositions & Sales Play
    - Fusion Application SOA & E2.0
    - Summary & Closing

    For registration and additional information, please visit our website. For more information on SOA Specialization and the SOA Partner Community, please feel free to register at www.oracle.com/goto/emea/soa (OPN account required).

    Read the article

  • bash command for each file in a folder

    - by Robert
    I have a set of files to which I would like to apply the same command; the output should have the same name as the processed file but with a different extension. Currently I am doing: rename /my/data/Andrew.doc to /my/data/Andrew.txt. I would like to do this for all the .doc files in the /my/data/ folder, preserving the names. I tried several versions but I guess I have something wrong in the syntax, as I am new to Linux.
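
    For what the question describes, a shell for-loop with parameter expansion is the usual idiom. A minimal sketch, assuming the goal is a plain rename of every .doc in /my/data/ to the same name with a .txt extension:

        #!/bin/bash
        # Rename every .doc file in /my/data/ to <same name>.txt
        for f in /my/data/*.doc; do
            [ -e "$f" ] || continue        # skip if the glob matched nothing
            mv -- "$f" "${f%.doc}.txt"     # ${f%.doc} strips the .doc suffix
        done

    If the real task is a conversion rather than a rename, substitute the tool for mv, e.g. some-converter "$f" > "${f%.doc}.txt" (some-converter being a placeholder for whatever actually produces the .txt output).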

    Read the article

  • Ethernet switch capacity question

    - by Andrew Queisser
    We're looking at hooking up 48 small embedded systems with 10/100 Ethernet ports to an Ethernet switch and then having that switch talk to a server upstream via a faster connection. I have a couple of questions about that scenario: What kind of upstream connection is best (fiber, other)? Would it be reasonable to download 1 GB/hour from each of the 48 systems concurrently? We'd be using a TCP-based protocol of our own design. Thanks, Andrew
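
    A back-of-the-envelope check on the numbers (my arithmetic, not from the question): 48 systems at 1 GB/hour each is 48 GB/hour in aggregate, i.e. roughly 48 × 10^9 bytes ÷ 3600 s ≈ 13.3 MB/s ≈ 107 Mbit/s sustained. That fits within a single gigabit uplink with comfortable headroom for protocol overhead and bursts, so 1 GbE copper is typically sufficient; fiber matters mainly when the uplink run exceeds copper's ~100 m limit.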

    Read the article

  • Ubuntu and Belkin N150 f6d4050 Wireless USB adapter v2

    - by Andrew
    I'm new to Ubuntu, and I'm trying to get my Belkin USB adapter to work. There are plenty of discussions out there already about this, but none really helped me out. Here's what I've done:
    - Installed ndiswrapper
    - Installed ndisgtk
    - Installed the driver (rt2870.inf) via ndisgtk
    ndisgtk reported that the driver was installed and the hardware was present. The green light on the adapter is solid green, which I assume means that Ubuntu is aware of its presence. However, when I click the little wireless symbol in the navigation bar, there's no option to choose my adapter (assuming that it's supposed to show up there...). My adapter version is F6D4050. Where do I go from here? I'm an Ubuntu newb, so speak slowly. :P

    lsusb:

        andrew@ubuntu:~$ lsusb
        Bus 002 Device 003: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser
        Bus 002 Device 002: ID 04f9:0229 Brother Industries, Ltd
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 004: ID 050d:935b Belkin Components
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    lsmod:

        andrew@ubuntu:~$ lsmod
        Module              Size    Used by
        binfmt_misc         7960    1
        fbcon               39270   71
        tileblit            2487    1  fbcon
        font                8053    1  fbcon
        bitblit             5811    1  fbcon
        softcursor          1565    1  bitblit
        vga16fb             12757   0
        vgastate            9857    1  vga16fb
        snd_cmipci          37557   2
        snd_intel8x0        31155   2
        snd_ac97_codec      125394  1  snd_intel8x0
        ac97_bus            1450    1  snd_ac97_codec
        snd_mpu401          6875    0
        snd_pcm_oss         41394   0
        snd_mixer_oss       16299   1  snd_pcm_oss
        snd_pcm             87882   4  snd_cmipci,snd_intel8x0,snd_ac97_codec,snd_pcm_oss
        snd_opl3_lib        10846   1  snd_cmipci
        snd_hwdep           6924    1  snd_opl3_lib
        snd_mpu401_uart     6857    2  snd_cmipci,snd_mpu401
        snd_seq_dummy       1782    0
        snd_seq_oss         31219   0
        snd_seq_midi        5829    0
        snd_rawmidi         23420   2  snd_mpu401_uart,snd_seq_midi
        snd_seq_midi_event  7267    2  snd_seq_oss,snd_seq_midi
        snd_seq             57481   6  snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_seq_midi_event
        nouveau             515227  2
        ttm                 60847   1  nouveau
        snd_timer           23649   3  snd_pcm,snd_opl3_lib,snd_seq
        snd_seq_device      6888    6  snd_opl3_lib,snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_rawmidi,snd_seq
        ns558               3704    0
        ppdev               6375    0
        drm_kms_helper      30742   1  nouveau
        joydev              11072   0
        ndiswrapper         244768  0
        gameport            10966   3  snd_cmipci,ns558
        usblp               12407   0
        asus_atk0110        10033   0
        parport_pc          29958   1
        serio_raw           4918    0
        drm                 199204  4  nouveau,ttm,drm_kms_helper
        i2c_algo_bit        6024    1  nouveau
        edac_core           45423   0
        edac_mce_amd        9278    0
        k8temp              3912    0
        snd                 71106   23 snd_cmipci,snd_intel8x0,snd_ac97_codec,snd_mpu401,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_opl3_lib,snd_hwdep,snd_mpu401_uart,snd_seq_oss,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        soundcore           8052    1  snd
        snd_page_alloc      8500    2  snd_intel8x0,snd_pcm
        i2c_nforce2         6099    0
        lp                  9336    0
        parport             37160   3  ppdev,parport_pc,lp
        hid_logitech        8820    0
        ff_memless          5109    1  hid_logitech
        ohci1394            30260   0
        usbhid              41084   1  hid_logitech
        hid                 83440   2  hid_logitech,usbhid
        usb_storage         49833   0
        skge                41049   0
        ieee1394            94771   1  ohci1394
        sata_sil            8895    0
        forcedeth           55592   0
        sata_nv             23778   1
        pata_amd            11962   1
        floppy              63156   0
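
    When ndisgtk reports "hardware present" but no wireless interface ever appears in the network menu, a few standard diagnostics usually narrow it down (a sketch; only the ndiswrapper driver name is specific to this question):

        ndiswrapper -l                   # should report "driver installed, hardware present"
        lsmod | grep ndiswrapper         # confirm the module is actually loaded
        sudo modprobe ndiswrapper        # load it manually if it is not
        dmesg | grep -Ei 'ndis|wlan'     # kernel messages from the driver and wireless stack
        iwconfig                         # check whether a wireless interface (e.g. wlan0) exists

    If iwconfig never shows an interface, ndiswrapper did not bind to the device. Since the driver in question is rt2870.inf, the adapter is rt2870-based, and it may be worth testing the in-kernel rt2870sta driver instead of ndiswrapper (blacklisting one while trying the other).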

    Read the article

  • Where does $PATH get set in OS X 10.6 Snow Leopard?

    - by Andrew
    I type echo $PATH on the command line and get:

        /opt/local/bin:/opt/local/sbin:/Users/andrew/bin:/usr/local/bin:/usr/local/mysql/bin:/usr/local/pear/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/opt/local/bin:/usr/local/git/bin

    I'm wondering where this is getting set, since my .bash_login file is empty. I'm particularly concerned that, after installing MacPorts, it installed a bunch of junk in /opt. I don't think that directory even exists in a normal Mac OS X install. Update: Thanks to jtimberman for correcting my echo $PATH statement.
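
    For reference, on 10.6 the base PATH is assembled by /usr/libexec/path_helper (invoked from /etc/profile) out of /etc/paths plus the files in /etc/paths.d, and installers such as MacPorts typically append to ~/.profile or ~/.bash_profile on top of that. A quick way to trace the pieces:

        cat /etc/paths                      # system-wide base entries, one per line
        ls /etc/paths.d/                    # drop-in additions from installers
        /usr/libexec/path_helper -s         # the PATH line that /etc/profile evaluates
        grep -n path_helper /etc/profile    # where it gets invoked
        cat ~/.profile ~/.bash_profile 2>/dev/null   # per-user additions (MacPorts edits one of these)

    The /opt/local entries come from MacPorts itself; /opt is indeed not part of a stock Mac OS X install, which is consistent with the concern above.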

    Read the article

  • Podcast Show Notes: The Big Deal About Big Data

    - by Bob Rhubart
    This week the OTN ArchBeat podcast kicks off a three-part series that looks at Big Data: what it is, its effect on enterprise IT, and what architects need to do to stay ahead of the big data curve. My guests for this conversation are Jean-Pierre Dijks and Andrew Bond. Jean-Pierre, based at Oracle HQ in Redwood Shores, CA, is product manager for Oracle Big Data Appliance and Oracle's big data strategy. Andrew Bond is Head of Transformation Architecture for Oracle, where he specializes in Data Warehousing, Business Intelligence, and Big Data. Andrew is based in the UK, but for this conversation he dialed in from a car somewhere on the streets of Amsterdam.

    Listen to Part 1: What is Big Data, really, and why does it matter?
    Listen to Part 2 (Oct 10): What new challenges does Big Data present for architects? What do architects need to do to prepare themselves and their environments?
    Listen to Part 3 (Oct 17): Who is driving the adoption of Big Data strategies in organizations, and why?

    Additional resources:
    - http://blogs.oracle.com/datawarehousing
    - http://www.facebook.com/pages/OracleBigData
    - https://twitter.com/#!/OracleBigData

    Coming soon: a conversation about how the rapidly evolving enterprise IT landscape is transforming the roles, responsibilities, and skill requirements for architects and developers. Stay tuned: RSS

    Read the article

  • Accenture Foundation Platform for Oracle (AFPO) – Your pre-built & tested middleware platform

    - by JuergenKress
    The Accenture Foundation Platform for Oracle (AFPO) is a pre-built, tested reference application, common services framework and development accelerator for Oracle's Fusion Middleware 11g product suite that can help to reduce development time and cost by up to 30 percent. AFPO is a unique accelerator that includes documentation, day-one deliverables and quick-start virtual machine images, along with access to a skilled team of resources, to reduce risk and cost while improving project quality. It can be delivered all at once or in stages, on-site, hosted, or as a cloud solution.

    Accenture recently released AFPO v5 for use with their clients. Significant updates in v5 include Day 1 images and documentation for WebCenter and ADF Mobile, integrated with 30 other Oracle Middleware products, significantly reducing the services effort needed to stand these products up. AFPO v5 also features rapid configuration and implementation capabilities for SOA/BPM integrated with Oracle WebCenter Portal, Oracle WebCenter Content, Oracle Business Intelligence, Oracle Identity Management and Oracle ADF Mobile. AFPO v5 also delivers a starter kit for Oracle SOA Suite, which builds upon the integration methodology, leading practices and extended tooling contained within the Oracle Foundation Pack. The combination of the AFPO starter kit and Foundation Pack jump-starts and streamlines Oracle SOA Suite implementation initiatives, helping to reduce the risk of deploying new technologies and making architectural decisions, so clients can ultimately reduce the cost, risk and time needed for an implementation.

    You'll find more information at:
    - Accenture's website: www.accenture.com/afpo
    - YouTube AFPO Telestration: http://www.youtube.com/watch?v=_x429DcHEJs
    - Press Release
    - Brochure
    Contacts: [email protected], Patrick J Sullivan (Accenture, Global Oracle Technology Lead), [email protected]

    SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Accenture Foundation Platform for Oracle (AFPO)

    - by Lionel Dubreuil
    The Accenture Foundation Platform for Oracle (AFPO) is a pre-built, tested reference application, common services framework and development accelerator for Oracle's Fusion Middleware 11g product suite that can help to reduce development time and cost by up to 30 percent. AFPO is a unique accelerator that includes documentation, day-one deliverables and quick-start virtual machine images, along with access to a skilled team of resources, to reduce risk and cost while improving project quality. It can be delivered all at once or in stages, on-site, hosted, or as a cloud solution.

    Accenture recently released AFPO v5 for use with their clients. Significant updates in v5 include Day 1 images and documentation for WebCenter and ADF Mobile, integrated with 30 other Oracle Middleware products, significantly reducing the services effort needed to stand these products up. AFPO v5 also features rapid configuration and implementation capabilities for SOA/BPM integrated with Oracle WebCenter Portal, Oracle WebCenter Content, Oracle Business Intelligence, Oracle Identity Management and Oracle ADF Mobile. AFPO v5 also delivers a starter kit for Oracle SOA Suite, which builds upon the integration methodology, leading practices and extended tooling contained within the Oracle Foundation Pack. The combination of the AFPO starter kit and Foundation Pack jump-starts and streamlines Oracle SOA Suite implementation initiatives, helping to reduce the risk of deploying new technologies and making architectural decisions, so clients can ultimately reduce the cost, risk and time needed for an implementation.

    You'll find more information at:
    - Accenture's website: www.accenture.com/afpo
    - YouTube AFPO Telestration: http://www.youtube.com/watch?v=_x429DcHEJs
    - Press Release
    - Brochure
    Contacts: [email protected], Patrick J Sullivan (Accenture, Global Oracle Technology Lead), [email protected]

    Read the article

  • Managed Cloud Services Wins Another Prestigious Industry Award

    - by Dori DiMassimo-Oracle
    Over the last 90 days, Oracle Managed Cloud Services has been the proud recipient of TWO prestigious industry awards for service excellence and customer value leadership. The most recent is last month's 2014 Frost & Sullivan Best Practice Award, North America Managed Cloud Customer Value Leadership, which rated Oracle Managed Cloud Services as the clear leader versus other providers; Managed Cloud received an "exceptional" rating in 9 of 10 evaluation categories. The research report is an excellent look at our industry and at what cloud customers looking for a managed solution value. In April, Managed Cloud was a repeat winner of the Outsourcing Excellence Award: 2014 Outsourcing Excellence Award, Best ITO Infrastructure (Sony Computer Entertainment America). Last year we won the award for Best Cloud: 2013 Outsourcing Excellence Award, Best Cloud (Take-Two Interactive). These awards are a great testament to the transformation of Managed Cloud Services into a true cloud-based business and a strategic and relevant part of the Oracle Cloud Solutions portfolio. Frost & Sullivan, in particular, recognizes our vision and our capability to successfully manage business transactions in the cloud.

    Read the article

  • Django inlineformset validation and delete

    - by Andrew Gee
    Hi, can someone tell me whether a form in an inlineformset should go through validation if its DELETE field is checked? I have a form that uses an inlineformset, and when I check the DELETE box it fails validation because the required fields are blank. If I put data in the fields it passes validation and is then deleted. Is that how it is supposed to work? I would have thought that a form marked for deletion would bypass validation. Regards, Andrew

    Follow-up (but I would still appreciate some other opinions/help): What I have figured out is that, for validation to work, a formset form must either be empty or complete (valid); otherwise it will have errors when it is created and will not be deleted. Since a couple of hidden fields in my formset forms are pre-populated via JavaScript when the page loads, the form fails validation on the other required fields, which might still be blank. The way I have gotten around this is by adding a check in add_fields that tests whether the DELETE input is true; if it is, it makes all fields on the form not required, which means the form passes validation and is then deleted:

        def add_fields(self, form, index):
            # add other fields that are required...
            deleteValue = form.fields['DELETE'].widget.value_from_datadict(
                form.data, form.files, form.add_prefix('DELETE'))
            if bool(deleteValue) or deleteValue == '':
                for name, field in form.fields.items():
                    form.fields[name].required = False

    This seems like an odd way to do things, but I cannot figure out another way. Is there a simpler way that I am missing? I have also noticed that when I add a new form to my page and check the DELETE box, no value is passed back in the request; however, an existing form (one loaded from the database) has a value of "on" when the DELETE box is checked. If the box is not checked, the input is not in the request at all. Thanks, Andrew

    Read the article

  • Can't Access TFS 2010 Beta 2 from Visual Studio 2010 Beta 2 when domain joined

    - by Brian Sullivan
    I'm experimenting with an installation of TFS 2010 Beta 2 on a virtual machine under VirtualBox running Windows Server 2008. When I've got the server in a workgroup, I can connect to it from Visual Studio just fine, as long as I provide credentials for a local user on the server machine when prompted by the "Connect to Team Foundation Server" dialog. The desktop I'm running Visual Studio on is joined to a domain. However, when I join the server to the domain, I can no longer connect to it from Visual Studio. I get a pretty generic error message: "TF31002 - Unable to connect to team foundation server". It gives me several different possible problems, including an incorrect address or an incorrect username and password. I've already added the domain Windows identity with which I'm logged on to the desktop to the TFS Admins group on the server, so I don't think it's a username/password problem. I've also tried putting the literal IP address of the server in the dialog address box instead of the machine name, but still no dice. I made sure that network discovery was enabled on the server, too, and can navigate to "\\webserver2008" in Windows Explorer without any problems. It shouldn't be a firewall problem, since the TFS install creates the appropriate exceptions in Windows Firewall. It's all a bit confusing, since it seems to work when the server is in a workgroup. Note: I'm a dev, not an admin, so there are many subtleties of server administration with which I'm not familiar. Please make no assumptions about what I may or may not have tried; what may be obvious to you may have never occurred to me. Thanks in advance!

    Read the article

  • Windows update error: Code 80072F8F (possibly datetime-not-correct, but it is)

    - by Andrew
    I have a Windows Server 2008 64-bit installation running as a virtual instance with a hosting provider. Windows Update worked fine until IE8 (along with some other updates) managed to get installed (don't get me started). Now all of a sudden Windows Update fails to run and complains with error 80072F8F. UPDATE: I've since removed IE8 and am still having issues (tissues are on order). This error apparently means that the time/timezone of the server is incorrect, which is not the case. I've synced the time with a time server and rebooted a number of times. I've followed the instructions here (http://support.microsoft.com/kb/929458) to no avail. Thanks! Andrew

    Read the article

  • Intel Core i5-2467m - Turbo Boost not activating?

    - by Trevor Sullivan
    I have a Samsung Series 5 laptop with an Intel Core i5-2467m processor @ 1.6 GHz. According to the specifications, the processor supports Intel Turbo Boost up to 2.30 GHz. The i5-2467m is a dual-core processor with HyperThreading, so there are a total of four (4) virtual cores in Windows 7 SP1. http://ark.intel.com/products/56858/ I've installed the Intel Turbo Boost Technology Monitor v2.6 to check whether Turbo Boost is engaging, and set it to "Always On Top." I followed this process to max out the CPU:
    - Open four (4x) PowerShell instances
    - Set each instance's affinity to a distinct CPU vCore
    - Run this code in each instance: while (1 -eq 1) { }
    Unfortunately, after maxing out all 4 cores, my laptop got hot, but Turbo Boost never kicked in. Any ideas on how to ensure that I'm getting the 2.3 GHz Turbo Boost capability of my laptop?

    Read the article

  • script to recursively check for and select dependencies

    - by rp.sullivan
    I have written a script that does this, but it is one of my first scripts ever, so I am sure there is a better way. :) Let me know how you would go about doing this; I'm looking for a simple yet efficient approach. Here is some important background info (it might be a little confusing, but hopefully by the end it will make sense):

    1) This image shows the structure/location of the relevant dirs and files.

    2) The packages.file located at ./config/default/config/packages is a space-delimited file. Field5 is the "package name", which I will call $a for explanation's sake. Field4 is the name of the dir containing the $a dir; I will call it $b. Field1 shows whether the package is selected or not: "X" (capital x) for selected and "O" (capital o, as in orange) for not selected. Here is an example of what the packages.file might contain:

        ...
        X ---3------ 104.800 database gdbm 1.8.3 / base/library CROSS 0
        O -1---5---- 105.000 base libiconv 1.13.1 / base/tool CROSS 0
        X 01---5---- 105.000 base pkgconfig 0.25 / base/tool CROSS 0
        X -1-3------ 105.000 base texinfo 4.13a / base/tool CROSS DIETLIBC 0
        O -----5---- 105.000 develop duma 2_5_15 / base/development CROSS NOPARALLEL 0
        O -----5---- 105.000 develop electricfence 2_4_13 / base/development CROSS 0
        O -----5---- 105.000 develop gnupth 2.0.7 / extra/development CROSS NOPARALLEL FPIC-QUIRK 0
        ...

    3) For almost every package listed in the packages.file there is a corresponding .cache file. The .cache file for package $a is located at ./package/$b/$a/$a.cache. The .cache files contain a list of dependencies for that particular package: the dependencies are field2 of the lines containing "[DEP]", and they are all names of packages in the packages.file. Here is an example of what one of the .cache files might look like:

        [TIMESTAMP] 1134178701 Sat Dec 10 02:38:21 2005
        [BUILDTIME] 295 (9)
        [SIZE] 11.64 MB, 191 files
        [DEP] 00-dirtree
        [DEP] bash
        [DEP] binutils
        [DEP] bzip2
        [DEP] cf
        [DEP] coreutils
        ...

    So with all that in mind, I'm looking for a shell script that, from within the main dir:
    - looks at the ./config/default/config/packages file, finds the "selected" packages and reads the corresponding .cache files;
    - compiles a list of dependencies that excludes the already-selected packages;
    - selects those dependencies (by changing field1 to X) in the ./config/default/config/packages file, and repeats until all dependencies are met (a sketch of this loop follows below).

    Note: the script will ultimately end up in the scripts dir and be called from the main dir. If this is not clear, let me know what needs clarification. For those interested, I'm playing around with T2 SDE. If you are into playing around with Linux it might be worth taking a look.
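
    Not having the asker's original script, here is one way to sketch the fixed-point loop in plain sh; the paths and field positions come from the description above, and it should be treated as an untested starting point rather than a finished tool:

        #!/bin/sh
        PKGFILE=./config/default/config/packages

        changed=1
        while [ "$changed" -eq 1 ]; do
            changed=0
            # Collect the [DEP] entries of every currently selected (X) package.
            deps=$(awk '$1 == "X" { print "./package/" $4 "/" $5 "/" $5 ".cache" }' "$PKGFILE" |
                while read -r cache; do
                    [ -f "$cache" ] && awk '$1 == "[DEP]" { print $2 }' "$cache"
                done | sort -u)
            for dep in $deps; do
                # If this dependency is present but unselected (O), flip it to X.
                if awk -v p="$dep" '$1 == "O" && $5 == p { f = 1 } END { exit !f }' "$PKGFILE"; then
                    awk -v p="$dep" '$1 == "O" && $5 == p { sub(/^O/, "X") } { print }' \
                        "$PKGFILE" > "$PKGFILE.tmp" && mv "$PKGFILE.tmp" "$PKGFILE"
                    changed=1
                fi
            done
        done

    Each pass selects the direct dependencies of everything already selected, so repeating until a pass changes nothing resolves the transitive closure.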

    Read the article

  • Ideas for scaling out database architecture

    - by andrew
    We're looking to scale out our existing database architecture and need some advice on which way to go. We currently have 2 web servers behind a load balancer that both read & write to a single master database which replicates to a slave. Ideally, I'd like each of the webservers to point to their own master DB and have the data between the 2 synchronised but from what I've read, using any kind of master-master or ring-replication is discouraged. I'm looking for a general "what do other people do" kind of answer - database vendor isn't a concern at the moment but we'd like to stay with MySQL or convert to MSSQL. Any ideas would be gratefully received. Many thanks, Andrew

    Read the article

  • Domain propagation issues

    - by Andrew
    Hello to all. I have a very strange issue, really weird. On the weekend of May 9th I changed my server location from the US to the UK. Everything works correctly except the domains; there's something wrong. I have a few domains on this server but I still cannot access them, yet when I try from another location it works correctly. The funniest part is that everything works correctly from my girlfriend's work, about 500 meters from our house, but they have another ISP. It also works when I access the domains via a proxy server. I checked the who.is information and everything seems to be in order. On Sunday and this morning I was able to access my domains, but only for a while: when I refreshed the website a second time I got the error "Firefox was not able to connect server". Since then I'm still getting this error. Could it be my ISP's fault? Regards, Andrew
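
    Stale resolver caches are the usual culprit here: after a move, each ISP's DNS resolver keeps serving the old record until its TTL expires, which is why the site works from one network and via proxies but not from another. A quick way to compare answers (example.com standing in for one of the affected domains):

        dig example.com A              # the answer from your ISP's resolver
        dig @8.8.8.8 example.com A     # the answer from a public resolver, for comparison
        dig +short NS example.com      # confirm the delegation points where you expect

    If the public resolver already returns the new UK address while the ISP's resolver still returns the old US one, nothing is wrong on the server; the ISP's cache just has to expire.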

    Read the article

  • Logging in with a different password than the database password, PHPMyAdmin

    - by Andrew M
    I am trying to install phpMyAdmin on my server to manage my MySQL databases. Right now I have only one database I want to add, but I would like to be able to manage multiple databases from the same phpMyAdmin account. How would I configure phpMyAdmin so I could log in with "andrew" and a password of "examplepassword", instead of the annoyingly long and unchangeable database user and password I am provided (i.e. db3483478234, with a password of random characters)? I can't seem to find a place to specify a different password than the regular database username and password.

    Read the article

  • Outlook 2010 Error

    - by Trevor Sullivan
    I'm running Outlook 2010 SP1 on Windows 7 x64 SP1, and I'm getting an error message saying "Your Microsoft Exchange administrator has blocked the version of Outlook that you are using. Contact your administrator for assistance." I'm still able to log into my account using Outlook Web Access (OWA), so I know that my account is working just fine. Outlook 2010 with Service Pack 1 is the standard for Windows 7 client systems at this organization, and other people are able to access their e-mail just fine. When my account was initially configured, I was able to use Outlook for a couple of days, and then it suddenly stopped working, providing only the above error message. Do you have any ideas on what I should look into to resolve this problem? Is there any information I can obtain on the client side that will help the Exchange folks investigate the issue further? Is there any verbose logging I can enable, or diagnostic logging in Outlook? Cheers

    Read the article

  • Terminal Server/Win2K3: Users can't write to their My Documents or Temp folders

    - by Tim Sullivan
    I have a situation where a bunch of users are running our software on a Terminal Services machine on Windows 2003 Server. I've removed most permissions from the User group, but made sure they all have the appropriate permissions for their own application folders, as well as for their Documents and Settings folder. For some reason, even though everything seems to be set up properly, users can't create or delete files from their My Documents, Temp or other D&S folders. Whatever could be going on? I thought this was going to be straightforward, but clearly it's not! :-) Thanks for any help!

    Read the article

  • DNS subdomain problem - Hover.com

    - by Ryan Sullivan
    I use hover.com to manage my domain names, and I am having a huge problem setting a subdomain to a specific IP address. I want the subdomain on a particular domain name that I have, so I set an A record for that subdomain and pointed it at the IP address; it is not working at all. The thing that confuses me is that when I point a subdomain on a different domain name at the same IP address, it works just fine. Also, I have since deleted the DNS record from the domain on which it happened to work, and when I type that address into a browser it still resolves to the IP I had set. I am not sure what is going on at all. If this seems confusing, I am sorry; I am very confused about the whole thing myself. If any clarification is needed, just ask and I will try to clear things up.
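
    Both symptoms point at resolver caching: a new record is not visible everywhere until it propagates, and a deleted record keeps resolving until its TTL expires. A quick way to check (sub.example.com standing in for the real name; the first command shows the actual authoritative nameservers to query):

        dig +short NS example.com               # find the authoritative servers for the zone
        dig @ns1.hover.com sub.example.com A    # what the authoritative server publishes
        dig sub.example.com A                   # what your local resolver returns; note the TTL

    If the authoritative answer is correct but the local one differs, waiting out the TTL (or flushing the resolver cache) is usually all that is left to do.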

    Read the article

  • Why can't I create a Windows backup on my secondary disk?

    - by Brian Sullivan
    I've installed Windows 7 Ultimate on an SSD that I've added to the XPS desktop that I bought from Dell. I would like to use the built-in backup functionality to create incremental backups and store them on the large drive that came with the machine. I formatted the large drive and turned it into a Basic disk. However, when I try to set the backup location to the large internal disk (E:\) in the "Set up backup" wizard, I get a message saying, "A system image cannot be saved on a drive that your computer boots from or that Windows is installed on." Windows is not installed on that disk. I even deleted the OEM partition that was on the disk, and removed it completely from the boot order in the BIOS. Any clue why Windows is griping at me about this?

    Read the article

  • Windows 7 [virtualized] resolutions in MacBook Pro Retina

    - by Trevor Sullivan
    So, I was considering picking up a MacBook Pro Retina, but then I realized that Apple forces you to scale the resolution, so you don't actually see the true benefit of the 2880x1800 display; instead, you see upscaled, pixelated icons. I saw this for myself in an Apple store a couple of days ago. That's OK, though, because the main reason I'd purchase one is to run Windows 7 on it. However, I understand that the Boot Camp drivers have not been updated to work with the MBP Retina. The alternative would be to run Windows 7 virtualized, but I haven't found any conclusive evidence to indicate whether the full 2880x1800 resolution would behave the same virtualized (VMware Fusion, VirtualBox, Parallels) as it does running Windows 7 natively. My question is: does Windows 7 see the entire 2880x1800 resolution when virtualized, the same as when running it on bare metal (Boot Camp)?

    Read the article

  • Enabling BitLocker in Native VHD Boot

    - by Trevor Sullivan
    I have a laptop with a single hard drive, using the GUID Partition Table (GPT) disk layout, with the following partitions:
    - 120 MB EFI System Partition
    - 300 MB Microsoft Reserved Partition (MSR)
    - Remainder: GPT primary partition
    I have a Windows 8 Professional VHD configured as a native-boot VHD on the GPT primary partition. Can I use BitLocker to encrypt my main partition, or to encrypt the VHD volume?

    Read the article

  • Debugging iFilter plug-in (PDF indexing)

    - by Trevor Sullivan
    I have the official Adobe x64 iFilter PDF plug-in and the FoxIt Software iFilter PDF plug-in installed, and neither one seems to be allowing me to index the contents of PDF files. So far, I've: Added my data folder into the Indexing service configuration Ensured that PDF files are configured to index "file properties and contents" Rebuilt the index from scratch But, when I search, I can only search for PDF file names, not the contents of them. Any ideas on how to debug this issue?

    Read the article

  • WINDOWS: Your computer hangs; you can open the Run dialog (Windows+R), but performance is so degraded that Task Manager is unusable

    - by John Sullivan
    The question is: what options are available to try to recover from total system instability, before pulling the plug, when we can do nothing but launch programs or batches on the path from the Run dialog (Windows+R key), and performance is so dead that Task Manager, Process Explorer, or other programs with visual GUIs are not usable? I am not a Windows expert, but ideally someone out there has written a program that does more or less the following: immediately set its priority to extremely high (or perhaps I can set that from the Run prompt), then evaluate performance bottlenecks. E.g., is the CPU at 100%? If so, identify the offending program(s) or problems. Attempt and log fixes, then provide crude feedback asking the user whether performance has stabilized enough to abort; wait a few seconds; if there is no feedback, continue; and so on. Eventually, if the program decides it cannot recover, try any "system cleanup" it can, and perhaps finally emit a series of beeps to say "OK, I give up, time to pull the plug". Ideally, it would create a log when able. These kinds of horrible hangs are a situation where surely trying something, anything, is better than nothing, as long as that something is intelligent, when the alternative is ripping out the power cord. Again, I am not a Windows expert, so perhaps there is a much more elegant hands-on approach I am not aware of.

    Read the article
