Search Results

Search found 803 results on 33 pages for 'greg weinman'.

Page 8 of 33

  • November EPM Patch Set Updates released

    - by p.anda
    (in via Greg) Greg has provided us with an updated listing of current patches for the EPM system, following on from our previous blog post last month in October [link].

    17320505 - Oracle Hyperion Reporting and Analysis for Foundation - PSU 11.1.2.1.136
    17413112 - Oracle Hyperion Planning, Fusion Edition - PSU 11.1.2.2.305
    16345450 - Oracle Hyperion Reporting and Analysis for Financial Reporting - PSU 11.1.2.1.134
    17609530 - Hyperion Essbase RTC - PSU 11.1.2.3.003
    17609535 - Hyperion Essbase Server - PSU 11.1.2.3.003
    17609533 - Hyperion Essbase Client - PSU 11.1.2.3.003
    17609539 - Hyperion Essbase Client MSI - PSU 11.1.2.3.003
    17609518 - Hyperion Essbase Administration Services Server - PSU 11.1.2.3.003
    17609497 - Hyperion Essbase Administration Services Console MSI - PSU 11.1.2.3.003
    17609493 - Hyperion Analytic Provider Services - PSU 11.1.2.3.003
    16692973 - Oracle Hyperion Enterprise Performance Management - PSU 11.1.2.2.301
    16984944 - Oracle Hyperion Financial Close Management - PSU 11.1.2.2.352
    16989110 - Oracle Hyperion Financial Close Management - PSU 11.1.2.3.100
    17636270 - Hyperion Strategic Finance - PSU 11.1.2.1.103

    Be sure to review the Readme file available with each Patch Set Update. It describes the defects fixed and/or updates included, along with requirements and instructions for applying the patch. To access it, simply click the "Read Me" button when viewing the PSU via My Oracle Support | Patches & Updates. To see a listing of the latest Enterprise Performance Management (EPM) Patch Sets and Patch Set Updates for the current releases at any time, visit:

    Doc ID 1400559.1 - Available Patch Sets and Patch Set Updates for Oracle Hyperion EPM Products
    Doc ID 1525518.1 - Available Patch Sets and Patch Set Updates for Oracle Crystal Ball, DRM, FCM, HPCM and HSF

    For OBIEE, keep up to date with the latest Patches and Patch Set Updates by visiting:

    Doc ID 1488475.1 - OBIEE 11g: Required and Recommended Patches and Patch Sets

    Read the article

  • Looking Under the Hood of ...

    - by rickramsey
    Fair is fair. Our last post featured a conversation with the beautiful and talented Eva Mendez, so today we're featuring something for those of you who prefer the other gender of our fair species. This dude has quite the hardware challenge ahead of him. He hasn't begun to find out what's really under that hood. Life is much easier for you and me, thanks to Greg King and Suzanne Zorn. They wrote a wicked cool article about Oracle VM Server for x86. Here's a little bit about it...

    Looking Under the Hood of Networking in Oracle VM Server for x86

    Oracle VM Server for x86 lets you create logical networks out of physical Ethernet ports, bonded ports, VLAN segments, virtual MAC addresses (VNICs), and network channels. You can then assign channels (or "roles") to each logical network so that it handles the type of traffic you want it to. Greg King explains how you go about doing this, and how Oracle VM Server for x86 implements the network infrastructure you configured. He also describes how the VM interacts with paravirtualized guest operating systems, hardware-virtualized operating systems, and VLANs. Finally, he provides an example that shows how it all looks from the VM Manager view, the logical view, and the command-line view of Oracle VM Server for x86.

    More Resources for Oracle VM Server for x86

    If you liked Greg and Suzanne's paper, you can...
    Download Oracle VM Server for x86 here
    Find technical resources for Oracle VM Server for x86 here

    Now, if we could just come up with a name for this awesome product that doesn't feel like I'm talking with a mouthful of marbles... :-)

    - Rick

    Website | Newsletter | Facebook | Twitter

    Read the article

  • Java Developer Workshop: the latest from across the Java platform!

    - by rika.tokumichi
    The Java Developer Workshop was held on 19 May 2011, covering the latest across the Java platform. Over five sessions, speakers presented the state and roadmap of Java SE (Standard Edition), Java ME (Micro Edition), and Java FX.

    The keynote came from Greg Bollella of the Embedded Java group, who outlined the direction of Java SE along with the outlook for Java ME and Embedded Java. On Java SE 7: as announced at JavaOne, the Hotspot and JRockit virtual machines are being converged into "HotRockit", with Java SE 7 scheduled for release on 28 July 2011 and Java SE 8 planned for 2012. JDK 7 brings the small language changes of Project Coin and ClassLoader improvements, while closures (Project Lambda) and modularity (Project Jigsaw) have been deferred to JDK 8. (The JDK 7 feature list is published at http://openjdk.java.net/projects/jdk7/features/.) He also introduced Java SE for Embedded and the Oracle Java ME Embedded Client for embedded devices.

    A session from the Java Embedded Global Business Unit demonstrated Java FX 2.0 and its new capabilities. The Sun Middleware Globalization team presented NetBeans 7.0, released in April 2011 with JDK 7 support, HTML5 editing, and PHP features. Further sessions covered OSGi and the work of the OSGi Alliance, and Blu-ray Java and its APIs.

    Ricoh presented "Java+Ricoh: Create, Share, and Think as one.", introducing its Embedded Software Architecture, the Ricoh Developer Program, and the RICOH & Java Developer Challenge. The day closed with a lighter talk, "~SIAWASE~", wrapping up a workshop packed with information for Java developers.

    Read the article

  • CQRS - Benefits

    - by Dylan Smith
    Thanks to all the comments and feedback from the last post, I think I have a better understanding now of the benefits of CQRS (separate from the benefits of Event Sourcing). I'm going to try and sum it up here, and point out some areas where I could still use some advice.

    CQRS Benefits

    It sounds like the primary benefit of CQRS as an architecture is that it allows you to create a simpler domain model by sucking out everything related to queries. I can definitely see the benefit of this: in general the domain logic related to commands is the high-value behavior in the software, but the logic required to service queries would add a lot of low-value "noise" to the domain model that would dilute the high-value (command) behavior - sorting, paging, filtering, pre-fetch paths, etc. Also, the most appropriate domain structure for implementing commands might not be the most optimal for implementing queries. To paraphrase Greg, this usually results in a domain model that is mediocre at both, piss-poor at one, or more likely piss-poor at both commands and queries.

    Not only will you be able to simplify your domain model by pulling out all the query logic, but at least a handful of commands in most systems will probably be "pass-through" type commands with little to no logic that just generate events. If these can be implemented directly in the command handler and never touch the domain model, you can slim down the domain model even more.

    Also, if you were to do event sourcing without CQRS, you would no longer have a database containing the current state (only the domain model would have it), which makes it difficult (or impossible) to support the ad-hoc querying and/or reporting that is common in most business software.

    Of course CQRS provides some great scalability benefits, and not only scalability: I have to assume it provides extremely low latency for most operations, especially with an asynchronous event bus. I know Greg says that you get a 3x scaling (Commands, Queries, Client) of your ability to perform parallel development, but IMHO it seems like it only provides 1.5x scaling, since even without CQRS you're going to have your client loosely coupled to your domain - which is still a great benefit to be able to realize.

    Questions / Concerns

    If all the queries against an aggregate get pulled out to the Query layer, what if the only commands for that aggregate can be handled in a "pass-through" manner, with the command handler directly generating events? Is it possible to have an aggregate that isn't modeled in the domain model? Are there any issues or downsides to this?

    I know in the feedback from my previous posts it was suggested that having one domain model handle both commands and queries requires implementing a lot of traversals between objects that wouldn't be necessary if it were only servicing commands. My question is: do you include traversals in your domain model based on the needs of the code, or based on the conceptual domain model? If none of my commands require a Customer.Orders traversal, but the conceptual domain includes the concept of a set of orders belonging to a customer - should I model that in my domain model or not?

    I like the idea of using the Query side of the architecture as a place to put junior devs, where the risk of them screwing something up has minimal impact. But I'm not sold on the idea that you can actually outsource it. Like I said in one of my comments on my previous post, the code to handle a query and generate DTOs is going to be dead simple, but the code to process events and apply them to the tables on the query side is going to require a significant amount of domain knowledge: you have to know which events to listen for to update each of the de-normalized tables, and what changes need to be made when each event is processed. In my experience, handing anything that requires significant domain knowledge to outsourced developers has never been successful. And if you need to spec out for each new query which events to listen to and what to do with each one, that's probably going to be just as much work to document as it would be to just implement it.

    Greg made the point in a comment that doing an aggregate query like "Total Sales By Customer" is going to be inefficient if you use event sourcing but not CQRS. I don't understand why that would be the case. I imagine you'd simply have a method/property on the Customer object that calculated total sales for that customer by enumerating over the Orders collection. Then the application services layer would generate DTOs off of the Customers collection that included, say, the CustomerID, CustomerName, and TotalSales, or whatever the case may be. As long as you use a snapshotting implementation, I don't see why that would be any more inefficient in a DDD+Event Sourcing implementation than in a typical DDD implementation.

    Like I mentioned in my last post, I still have some questions about query logic that haven't been answered yet, but before I start asking those I want to make sure I have a strong grasp on what benefits CQRS provides. My main concern with the query logic was that I know I could just toss it all into the query side, but I was worried that I would lose the benefits of using CQRS in the first place if I did that. I want to elaborate more on this with some example situations in an upcoming post.

    Read the article

  • Planning an Event – SPS NYC

    - by MOSSLover
    I bet some of you were wondering why I am not going to any events for the most part in June and July (aside from volunteering at SPS Chicago). Well, I basically have no life for the next 2 months. We are approaching the 11th hour of SharePoint Saturday New York City. The event is slated to have 350 to 400 attendees. This is the second year in a row I've helped run it with Jason Gallicchio. It's amazing how much more effort this event requires versus Kansas City: it's literally 2x the volume of attendees, speakers, and sponsors, plus don't even get me started about volunteers. So here is a bit of the breakdown...

    We have 30+ volunteers that Tasha Scott from the Hampton Roads area will be managing the day of the event, doing things like timing the speakers, handing out food, making sure people who did not sign up don't walk into the event until we get a count for fire code, registering people, watching the sharpies, watching the prizes, making sure attendees get to the right place, opening and closing the partition in the big room, moving chairs, moving furniture, etc. Then there are Jason, Greg, and I, who will be making sure that the speakers, the sponsors, and everything else are going smoothly in the background. We need to make sure that everything is set up properly and in the right spot. We also need to make sure signs are printed, schedules are created, and bags are stuffed with sponsor material. Plus we need to send out emails to sponsors reminding them to send us the right information to post on the site for charity sessions, to send us boxes with material to stuff bags, and to make sure that Michael Lotter gets their information for invoicing. We also need to check that Michael has invoiced everyone and track who has paid, because we can't order anything for the event without money. Once people have paid we need to set up food orders, speaker and volunteer dinners, buy prizes, buy bags, buy speaker/volunteer/organizer shirts, etc.

    During this process we need all the abstracts from speakers, all the bios, pictures, shirt sizes, and other items so we can create schedules and order items. We also need to keep track of who is attending the dinner the night before for volunteers and speakers, and make sure we don't hit capacity. Then there is attendee tracking and making sure that we don't take on too many attendees. We need to make sure that attendees know where to go and what to do. We have to make all kinds of random supply lists during this process and keep on track with a variety of lists and emails plus conference calls. All in all it's a lot of work, and I am trying to stay on top of it all so that we don't duplicate anything or miss anything. So basically, if you don't see me around for another month, don't worry: I do exist.

    Right now, if you look at what I'm doing, I am traveling every Monday morning and Thursday night back and forth between Washington DC and New Jersey. Every night I am working on organizational stuff for SharePoint Saturday New York City. Every Tuesday night we are running an event conference call. Every weekend I am either with family or with my boyfriend and cat, trying hard not to touch the event. So all my time is pretty much work, event, and family/boyfriend. I have 0 bandwidth for anything else in the community. If you compound that with my severe allergy problems in DC and a doctor's appointment every month plus a new med once a week, I'm lucky I am still standing and walking.

    So basically, once July 30th hits, hopefully Jason Gallicchio, Greg Hurlman, and I will be able to breathe a little easier. In case I forget to do this later: thank you, Greg and Jason. Thank you, Tom Daly. Thank you, Michael Lotter. Thank you, Tasha Scott. Thank you, Kevin Griffin. Thank you to all the volunteers, speakers, sponsors, and attendees who will have made this event a success. Hopefully we have enough time until next year to regroup, recharge, and make the event grow bigger in a different venue. Awesome job, everyone: we sold out within 3 days of registration and we still have several weeks to go. Right now the waitlist is at 49 people, with room to grow. If you attend the event, thank all these folks I mentioned above for making it possible. It's going to be awesome, I know it, but I probably won't remember half of it due to the blur of things that we will all be taking care of the day of the event. Catch you all at the end of July/early August, when I will attempt to post something useful and clever, possibly while wearing a fez.

    Technorati Tags: SPS NYC, SharePoint Saturday, SharePoint Saturday New York City

    Read the article

  • Mysql Slave stuck in "System lock"

    - by Greg
    My MySQL slave is spending a lot of time in Slave_SQL_Running_State: System lock. I can see that the system is currently I/O write bound, and that it is processing the log, although slowly. Show processlist doesn't show anything other than "Waiting for master to send event" and "System lock" when it is in this state. All my tables (other than the system tables) are InnoDB, and external locking is disabled. What is the slave doing in this state?
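
    A quick way to dig further into what the SQL thread is doing while it reports "System lock" (a sketch, assuming shell access on the slave; credentials omitted):

        $ mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_SQL_Running_State|Seconds_Behind_Master'
        $ mysql -e "SHOW FULL PROCESSLIST;"     # full statement text, not the truncated view
        $ iostat -x 5                           # confirm the write-bound disk mentioned above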

    Read the article

  • BSOD in a Hyper-V Guest VM on install

    - by Greg Hurlman
    I've got Windows Server 2008 R2 running as a development environment, with Hyper-V hosting a few different VMs. I've created a new VM: 4GB of RAM, 2 virtual procs, legacy network adapter, removed the SCSI interface. I'm booting to an ISO image of the Windows Server 2008 R2 DVD for the OS install. The problem is, after the "Windows is loading files" screen, but before the Windows logo animation, I get a blue screen:

    BAD_SYSTEM_CONFIG_INFO
    yada yada yada
    *** STOP: 0x00000074 (etc, etc)

    I've used this same ISO several times to install to other VMs, no issue.

    Read the article

  • MacPorts Apache not starting at Mac OS X Snow Leopard boot

    - by greg
    MacPorts Apache2 is not starting at Mac OS X Snow Leopard boot. I've run the launchctl load command, and the symlink points to my /opt/local/etc/LaunchDaemons/org.macports.apache2/org.macports.apache2.plist, but it never starts. I can start it manually, and it works fine after that; it just won't load on startup. My server name is set in /opt/local/apache2/conf/httpd.conf; I had read that sometimes makes a difference. I've done the launchctl unload and load trick, all with no results. I'm out of ideas.
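
    For what it's worth, a sketch of the usual checks (paths follow the standard MacPorts layout, where a symlink is created in /Library/LaunchDaemons; the -w flag is what clears the disabled override so the job starts at boot):

        $ sudo launchctl load -w /Library/LaunchDaemons/org.macports.apache2.plist
        $ sudo launchctl list | grep org.macports.apache2    # should appear once loaded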

    Read the article

  • Configuring VirtualBox host only networking: OSX host, Ubuntu guest

    - by Greg K
    I have an Ubuntu guest configured with two interfaces. eth0 uses NAT and works fine; I can access the net. The second interface, eth1, is set to host-only networking, and VirtualBox has created a vboxnet0 virtual adapter on the host. I've configured vboxnet0 in the VirtualBox adapter settings with the following:

    ip 192.168.21.20
    subnet 255.255.255.0

    Once the VM guest is running, ifconfig on OSX shows vboxnet0 set up as:

    vboxnet0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
            ether 0a:00:27:00:00:00
            inet 192.168.21.20 netmask 0xffffff00 broadcast 192.168.21.255

    In the guest, eth0 is set to use DHCP. I've statically assigned eth1 to 192.168.21.20 (is this a mistake?):

    auto eth1
    iface eth1 inet static
        address 192.168.21.20
        netmask 255.255.255.0
        network 192.168.21.0
        broadcast 192.168.21.255
        gateway 192.168.21.1

    There is no device on 192.168.21.1 - what should I set my gateway to? In the guest the routes look like so:

    Destination     Gateway         Genmask         Flags  Metric  Ref  Use  Iface
    192.168.21.0    *               255.255.255.0   U      0       0    0    eth1
    10.0.2.0        *               255.255.255.0   U      0       0    0    eth0
    default         10.0.2.2        0.0.0.0         UG     100     0    0    eth0
    default         192.168.21.1    0.0.0.0         UG     100     0    0    eth1

    Route table on OSX:

    $ netstat -nr
    Routing tables
    Internet:
    Destination        Gateway            Flags    Refs  Use   Netif    Expire
    default            10.77.36.1         UGSc     28    0     en1
    10.77.36/22        link#5             UCS      5     0     en1
    10.77.39.38        127.0.0.1          UHS      1     2236  lo0
    10.77.39.255       link#5             UHLWbI   1     66    en1
    127                127.0.0.1          UCS      0     0     lo0
    127.0.0.1          127.0.0.1          UH       1     8642  lo0
    169.254            link#5             UCS      0     0     en1
    192.168.21         link#7             UC       2     0     vboxnet
    192.168.21.20      a:0:27:0:0:0       UHLWI    0     4     lo0
    192.168.21.255     link#7             UHLWbI   2     64    vboxnet

    I can't SSH from the host to the guest (I used to be able to when the VM was configured with a bridged connection):

    $ ssh 192.168.21.20
    ssh: connect to host 192.168.21.20 port 22: Connection refused

    What have I done wrong here? TIA
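
    A plausible culprit, not from the original post: the guest's eth1 uses the same address (192.168.21.20) that the host's vboxnet0 adapter already owns, and a host-only network has no router, so no gateway line belongs there at all. A sketch of an /etc/network/interfaces stanza under that assumption (the .21 address is arbitrary, just distinct from the host's):

        auto eth1
        iface eth1 inet static
            address 192.168.21.21
            netmask 255.255.255.0

    With the duplicate address gone, the stray default route via 192.168.21.1 also disappears; sshd still needs to be installed and listening in the guest for the ssh test to succeed.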

    Read the article

  • 301 redirect root domain to www subdomain on godaddy windows hosting account

    - by Greg
    If I type in domain.com and www.domain.com, they both show the same website but show different URLs in the address bar. I'd like visitors and search engines that just type "domain.com" to be redirected to "www.domain.com". I'm using IIS 7 on a GoDaddy hosting account. How do I redirect all requests for "domain.com" to "www.domain.com"? I have the default DNS setup: "domain.com" as my A record, and the CNAME "www" pointing to my A record.
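
    A minimal sketch of the usual approach with the IIS 7 URL Rewrite module, placed in the site's web.config (this assumes the module is available on the hosting plan; domain.com is a placeholder):

        <rewrite>
          <rules>
            <rule name="Canonical www" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTP_HOST}" pattern="^domain\.com$" />
              </conditions>
              <!-- Permanent = HTTP 301, which search engines treat as canonical -->
              <action type="Redirect" url="http://www.domain.com/{R:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>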

    Read the article

  • Trying to run chrooted suPHP with UserDir, getting 500 server error

    - by Greg Antowski
    I've managed to get suPHP working fine with UserDir (i.e. PHP files run from /home/$username/public_html), but I can't get it to work when I chroot it to the user's home directory. I've been following this guide, adapting it to my needs: http://compilefailure.blogspot.co.nz/2011/09/suphp-chroot-gotchas.html I'm not creating vhosts; I just want PHP scripts to be jailed to the user's home directory. I've gotten to the part where you use makejail and set up a symlink. However, even with the symlink set up correctly, PHP scripts won't run. This is what's shown in the Apache error log:

    SoftException in Application.cpp:537: Could not execute script "/home/jimmy/public_html/test.php"
    [error] [client 127.0.0.1] Caused by SystemException in API_Linux.cpp:444: execve() for program "/usr/bin/php-cgi" failed: No such file or directory

    The thing is, if I try running either of the following commands in the terminal, it works without any issues:

    /home/jimmy/usr/bin/php-cgi /home/jimmy/public_html/test.php
    /usr/bin/php-cgi /home/jimmy/public_html/test.php

    I've been trying for hours to get this going, and documentation for this kind of stuff is almost non-existent. If anyone could help me out with this, I'd be extremely grateful.
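
    Worth noting (an inference from the log, not from the guide): once the process chroots to /home/jimmy, the path /usr/bin/php-cgi is resolved inside the jail, so both the binary and every shared library it links against must exist under /home/jimmy. A quick check:

        $ ls -l /home/jimmy/usr/bin/php-cgi
        $ ldd /home/jimmy/usr/bin/php-cgi    # each library listed must also be present inside the jail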

    Read the article

  • Glusterfs denied mount

    - by greg
    I'm using GlusterFS 3.3.2 with two servers and a brick on each one. The volume is "ARCHIVE80". I can mount the volume on Server2; if I touch a new file, it appears inside the brick on Server1. However, if I try to mount the volume on Server1, I get an error:

    Mount failed. Please check the log file for more details.

    The log gives:

    [2013-11-11 03:33:59.796431] I [rpc-clnt.c:1654:rpc_clnt_reconfig] 0-ARCHIVE80-client-0: changing port to 24011 (from 0)
    [2013-11-11 03:33:59.796810] I [rpc-clnt.c:1654:rpc_clnt_reconfig] 0-ARCHIVE80-client-1: changing port to 24009 (from 0)
    [2013-11-11 03:34:03.794182] I [client-handshake.c:1614:select_server_supported_programs] 0-ARCHIVE80-client-0: Using Program GlusterFS 3.3.2, Num (1298437), Version (330)
    [2013-11-11 03:34:03.794387] W [client-handshake.c:1320:client_setvolume_cbk] 0-ARCHIVE80-client-0: failed to set the volume (Permission denied)
    [2013-11-11 03:34:03.794407] W [client-handshake.c:1346:client_setvolume_cbk] 0-ARCHIVE80-client-0: failed to get 'process-uuid' from reply dict
    [2013-11-11 03:34:03.794418] E [client-handshake.c:1352:client_setvolume_cbk] 0-ARCHIVE80-client-0: SETVOLUME on remote-host failed: Authentication failed
    [2013-11-11 03:34:03.794426] I [client-handshake.c:1437:client_setvolume_cbk] 0-ARCHIVE80-client-0: sending AUTH_FAILED event
    [2013-11-11 03:34:03.794443] E [fuse-bridge.c:4256:notify] 0-fuse: Server authenication failed. Shutting down.

    How come I can mount the volume on one server but not on the other?
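
    One thing worth ruling out, since the log shows Permission denied during SETVOLUME: an auth.allow setting that matches the address Server2 connects from but not the one Server1 uses for itself. A sketch (the address pattern is a placeholder for the real subnet):

        $ gluster volume info ARCHIVE80 | grep auth        # any auth.allow/auth.reject set?
        $ gluster volume set ARCHIVE80 auth.allow 192.168.1.*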

    Read the article

  • Setting Proxy Server for IE 10 on Windows 8 using pac file and Group Policy

    - by Greg Bray
    We currently use Group Policy to configure a proxy server PAC file for Windows XP and Windows 7 computers on our network. We are now starting to get requests for Windows 8, but have noticed that our current GPO does not work for setting the proxy server on Windows 8 clients or Server 2012. Is it possible to do this using a 2008 R2 domain controller, or would we need to update our domain to a 2012 server? I found a reference to creating new GPO settings for "Internet Explorer 10 and 11", and vague references to using RSAT on Windows 8 to set IE 10 settings via preferences, but nothing that talks about using Group Policy to manage proxy settings.
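
    One workaround that sidesteps the IE-version-specific GPO templates is pushing the PAC URL straight into the registry (for example via Group Policy Preferences); IE 10 on Windows 8 reads the same value as earlier versions. A sketch, with a placeholder PAC URL:

        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v AutoConfigURL /t REG_SZ /d "http://proxy.example.com/proxy.pac"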

    Read the article

  • IIS7 url rewrite rules

    - by sympatric greg
    In a hosted environment, I will be utilizing subdomains (and virtual directories) for various coding projects. I have a rewrite rule that changes 'subdomain.domain.com/url' to 'domain.com/subdomain/url'. This worked fine, except that the browser couldn't find resources with paths generated by ResolveUrl("~/something"). The server was using the application path of "/subdomain/", so based on the rewrite rule, the browser's request for "/subdomain/something" was being looked for in "/subdomain/subdomain/something", where it wasn't to be found. Either of these URLs was valid:

    http://www.domain.com/subdomain/something
    http://subdomain.domain.com/something

    I resolved this by adding another URL rewrite rule to the subdomain:

    <rule name="RemoveSuperDir">
      <match url="subdomain/(.*)" />
      <action type="Rewrite" url="{R:1}" />
    </rule>

    So for each subdomain that I might add, I will need to add such a rule. Is there a way to write a single rule at the domain level to resolve this issue?
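
    A possible way to collapse the per-subdomain rules into one (a sketch, untested): concatenating host and URL into a single condition input lets a regex back-reference tie the subdomain to the duplicated directory name, whatever the subdomain is.

        <rule name="RemoveSuperDirAny">
          <match url=".*" />
          <conditions>
            <!-- \1 requires the leading path segment to equal the subdomain -->
            <add input="{HTTP_HOST}{REQUEST_URI}" pattern="^([^.]+)\.domain\.com/\1/(.+)$" />
          </conditions>
          <action type="Rewrite" url="{C:2}" />
        </rule>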

    Read the article

  • Forwarding all mail to a single dev box on IIS via virtual SMTP

    - by Greg R
    I am trying to set up a development environment for our web server. I would like all emails relayed by the server to go to a specific mailbox, regardless of who they were sent to. For example, if some application on the server sends an email to [email protected], I want that email to go to [email protected]. Is that possible to do with IIS/virtual SMTP? Is there some other way of doing this? I don't have Exchange Server running, if that makes a difference. Any help would be greatly appreciated. Thanks a lot!

    Read the article

  • Error installing pkgconfig via macports

    - by Greg K
    I installed MacPorts 1.8.2 from a DMG. That seemed to install fine. I ran sudo port selfupdate to make sure my ports tree was current. I then tried to install bindfs, as I want to mount some directories in my OS X file system (like you can do with mount --bind in Linux). pkgconfig and macfuse are two dependencies of bindfs. I had trouble installing bindfs due to errors installing pkgconfig, so I tried to install just pkgconfig. Here's the debug output from sudo port install pkgconfig:

    $ sudo port -d install pkgconfig
    DEBUG: Found port in file:///opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig
    DEBUG: Changing to port directory: /opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig
    DEBUG: OS Platform: darwin
    DEBUG: OS Version: 10.3.0
    DEBUG: Mac OS X Version: 10.6
    DEBUG: System Arch: i386
    DEBUG: setting option os.universal_supported to yes
    DEBUG: org.macports.load registered provides 'load', a pre-existing procedure. Target override will not be provided
    DEBUG: org.macports.unload registered provides 'unload', a pre-existing procedure. Target override will not be provided
    DEBUG: org.macports.distfiles registered provides 'distfiles', a pre-existing procedure. Target override will not be provided
    DEBUG: adding the default universal variant
    DEBUG: Reading variant descriptions from /opt/local/var/macports/sources/rsync.macports.org/release/ports/_resources/port1.0/variant_descriptions.conf
    DEBUG: Requested variant darwin is not provided by port pkgconfig.
    DEBUG: Requested variant i386 is not provided by port pkgconfig.
    DEBUG: Requested variant macosx is not provided by port pkgconfig.
    --->  Computing dependencies for pkgconfig
    DEBUG: Executing org.macports.main (pkgconfig)
    DEBUG: Skipping completed org.macports.fetch (pkgconfig)
    DEBUG: Skipping completed org.macports.checksum (pkgconfig)
    DEBUG: Skipping completed org.macports.extract (pkgconfig)
    DEBUG: Skipping completed org.macports.patch (pkgconfig)
    --->  Configuring pkgconfig
    DEBUG: Using compiler 'Mac OS X gcc 4.2'
    DEBUG: Executing org.macports.configure (pkgconfig)
    DEBUG: Environment: CFLAGS='-O2 -arch x86_64' CPPFLAGS='-I/opt/local/include' CXXFLAGS='-O2 -arch x86_64' MACOSX_DEPLOYMENT_TARGET='10.6' CXX='/usr/bin/g++-4.2' F90FLAGS='-O2 -m64' LDFLAGS='-L/opt/local/lib' OBJC='/usr/bin/gcc-4.2' FCFLAGS='-O2 -m64' INSTALL='/usr/bin/install -c' OBJCFLAGS='-O2 -arch x86_64' FFLAGS='-O2 -m64' CC='/usr/bin/gcc-4.2'
    DEBUG: Assembled command: 'cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig'
    checking for a BSD-compatible install... /usr/bin/install -c
    checking whether build environment is sane... yes
    checking for gawk... no
    checking for mawk... no
    checking for nawk... no
    checking for awk... awk
    checking whether make sets $(MAKE)... no
    checking whether to enable maintainer-specific portions of Makefiles... no
    checking build system type... i386-apple-darwin10.3.0
    checking host system type... i386-apple-darwin10.3.0
    checking for style of include used by make... none
    checking for gcc... /usr/bin/gcc-4.2
    checking for C compiler default output file name...
    configure: error: C compiler cannot create executables
    See `config.log' for more details.
    Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77
    DEBUG: Backtrace: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77
        while executing "$procedure $targetname"
    Warning: the following items did not execute (for pkgconfig): org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install
    Error: Status 1 encountered during processing.

    I have only recently installed Xcode 3.2.2 (prior to installing MacPorts). Am I right in thinking this is the issue here:

    configure: error: C compiler cannot create executables
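
    The quickest way to confirm whether the compiler itself is broken, independent of MacPorts, is a bare-bones test compile; a sketch:

        $ echo 'int main(void) { return 0; }' > /tmp/t.c
        $ /usr/bin/gcc-4.2 /tmp/t.c -o /tmp/t && echo "compiler OK"
        # if this fails, the Xcode 3.2.2 install is the likely culprit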

    Read the article

  • Microsoft Word 2007 Landscape Printing

    - by Greg Ogle
    When printing a document in landscape mode using Microsoft Word 2007, it prints portrait and scales the page (this varies a little per printer). I made a new document with just text, and the text is getting chopped even in print preview. It seems rather weird. Am I doing something wrong?

    Read the article

  • Ubuntu 11.10, using wget/curl fails with ssl

    - by Greg Spiers
    Note: See edit 3 for the solution. On a completely new install of Ubuntu I'm getting the following errors when using wget:

    $ wget https://test.sagepay.com
    --2012-03-27 12:55:12--  https://test.sagepay.com/
    Resolving test.sagepay.com... 195.170.169.8
    Connecting to test.sagepay.com|195.170.169.8|:443... connected.
    ERROR: cannot verify test.sagepay.com's certificate, issued by `/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA':
      Unable to locally verify the issuer's authority.
    To connect to test.sagepay.com insecurely, use `--no-check-certificate'.

    I've tried installing ca-certificates and configuring the ca-certs, and they all appear to be set up in /etc/ssl/certs. The same issue exists for cURL:

    $ curl https://test.sagepay.com
    curl: (60) SSL certificate problem, verify that the CA cert is OK.
    Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

    Which leads me to believe it's something wrong with openssl server-wide. wget and curl both work correctly locally on OSX, and I have confirmed with a few people that it's working on their servers, so I suspect it's nothing to do with the server I'm attempting to connect to. Any ideas or suggestions on things to try to narrow it down? Thank you

    Edit

    As requested, verbose output from curl:

    $ curl -Iv https://test.sagepay.com
    * About to connect() to test.sagepay.com port 443 (#0)
    *   Trying 195.170.169.8... connected
    * Connected to test.sagepay.com (195.170.169.8) port 443 (#0)
    * successfully set certificate verify locations:
    *   CAfile: none
        CApath: /etc/ssl/certs
    * SSLv3, TLS handshake, Client hello (1):
    * SSLv3, TLS handshake, Server hello (2):
    * SSLv3, TLS handshake, CERT (11):
    * SSLv3, TLS alert, Server hello (2):
    * SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
    * Closing connection #0
    curl: (60) SSL certificate problem, verify that the CA cert is OK.
    Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
    More details here: http://curl.haxx.se/docs/sslcerts.html

    Edit 2

    Using the hash from your comment I see this:

    ubuntu@srv-tf6sq:/etc/ssl/certs$ ls -al 7651b327.0
    lrwxrwxrwx 1 root root 59 2012-03-27 12:48 7651b327.0 -> Verisign_Class_3_Public_Primary_Certification_Authority.pem
    ubuntu@srv-tf6sq:/etc/ssl/certs$ ls -al Verisign_Class_3_Public_Primary_Certification_Authority.pem
    lrwxrwxrwx 1 root root 94 2012-01-18 07:21 Verisign_Class_3_Public_Primary_Certification_Authority.pem -> /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
    ubuntu@srv-tf6sq:/etc/ssl/certs$ ls -al /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
    -rw-r--r-- 1 root root 834 2011-09-28 14:53 /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
    ubuntu@srv-tf6sq:/etc/ssl/certs$ more /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
    -----BEGIN CERTIFICATE-----
    MIICPDCCAaUCEDyRMcsf9tAbDpq40ES/Er4wDQYJKoZIhvcNAQEFBQAwXzELMAkG
    A1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFz
    cyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2
    MDEyOTAwMDAwMFoXDTI4MDgwMjIzNTk1OVowXzELMAkGA1UEBhMCVVMxFzAVBgNV
    BAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAzIFB1YmxpYyBQcmlt
    YXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUAA4GN
    ADCBiQKBgQDJXFme8huKARS0EN8EQNvjV69qRUCPhAwL0TPZ2RHP7gJYHyX3KqhE
    BarsAx94f56TuZoAqiN91qyFomNFx3InzPRMxnVx0jnvT0Lwdd8KkMaOIG+YD/is
    I19wKTakyYbnsZogy1Olhec9vn2a/iRFM9x2Fe0PonFkTGUugWhFpwIDAQABMA0G
    CSqGSIb3DQEBBQUAA4GBABByUqkFFBkyCEHwxWsKzH4PIRnN5GfcX6kb5sroc50i
    2JhucwNhkcV8sEVAbkSdjbCxlnRhLQ2pRdKkkirWmnWXbj9T/UWZYB2oK0z5XqcJ
    2HUw19JlYD1n1khVdWk/kfVIC0dpImmClr7JyDiGSnoscxlIaU5rfGW/D/xwzoiQ
    -----END CERTIFICATE-----

    But doing the steps myself I end up with a different hash:

    $ strace -o /tmp/foo.out curl -Iv https://test.sagepay.com
    $ grep ssl /tmp/foo.out
    open("/lib/x86_64-linux-gnu/libssl.so.1.0.0", O_RDONLY) = 3
    stat("/etc/ssl/certs/415660c1.0", {st_mode=S_IFREG|0644, st_size=834, ...}) = 0
    open("/etc/ssl/certs/415660c1.0", O_RDONLY) = 4
    stat("/etc/ssl/certs/415660c1.1", 0x7fff7dab07b0) = -1 ENOENT (No such file or directory)

    $ readlink -f /etc/ssl/certs/415660c1.0
    /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
    $ more /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
    -----BEGIN CERTIFICATE-----
    MIICPDCCAaUCEDyRMcsf9tAbDpq40ES/Er4wDQYJKoZIhvcNAQEFBQAwXzELMAkG
    A1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFz
    cyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2
    MDEyOTAwMDAwMFoXDTI4MDgwMjIzNTk1OVowXzELMAkGA1UEBhMCVVMxFzAVBgNV
    BAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAzIFB1YmxpYyBQcmlt
    YXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUAA4GN
    ADCBiQKBgQDJXFme8huKARS0EN8EQNvjV69qRUCPhAwL0TPZ2RHP7gJYHyX3KqhE
    BarsAx94f56TuZoAqiN91qyFomNFx3InzPRMxnVx0jnvT0Lwdd8KkMaOIG+YD/is
    I19wKTakyYbnsZogy1Olhec9vn2a/iRFM9x2Fe0PonFkTGUugWhFpwIDAQABMA0G
    CSqGSIb3DQEBBQUAA4GBABByUqkFFBkyCEHwxWsKzH4PIRnN5GfcX6kb5sroc50i
    2JhucwNhkcV8sEVAbkSdjbCxlnRhLQ2pRdKkkirWmnWXbj9T/UWZYB2oK0z5XqcJ
    2HUw19JlYD1n1khVdWk/kfVIC0dpImmClr7JyDiGSnoscxlIaU5rfGW/D/xwzoiQ
    -----END CERTIFICATE-----

    Any other ideas? Thank you for the help so far :)

    Edit 3

    So it turns out that installing the ca-certificates package didn't install the one that I needed. I found this post about certificates being presented out of order. This seems to be the case with my request to sagepay. The solution ended up being to install another CA certificate from Verisign. I'm not sure why this fixes the ordering issue, but I suspect the out-of-order issue really isn't a problem at all and it was in fact because I was missing a certificate all along. The additional certificate is available in that post, but I didn't want to blindly trust it. I've looked at the list of CA certificates from cURL's site and it is listed there, so I do trust it. The certificate:

    Verisign Class 3 Public Primary Certification Authority
    =======================================================
    -----BEGIN CERTIFICATE-----
    MIICPDCCAaUCEHC65B0Q2Sk0tjjKewPMur8wDQYJKoZIhvcNAQECBQAwXzELMAkGA1UEBhMCVVMx
    FzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAzIFB1YmxpYyBQcmltYXJ5
    IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2MDEyOTAwMDAwMFoXDTI4MDgwMTIzNTk1OVow
    XzELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAz
    IFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUA
    A4GNADCBiQKBgQDJXFme8huKARS0EN8EQNvjV69qRUCPhAwL0TPZ2RHP7gJYHyX3KqhEBarsAx94
    f56TuZoAqiN91qyFomNFx3InzPRMxnVx0jnvT0Lwdd8KkMaOIG+YD/isI19wKTakyYbnsZogy1Ol
    hec9vn2a/iRFM9x2Fe0PonFkTGUugWhFpwIDAQABMA0GCSqGSIb3DQEBAgUAA4GBALtMEivPLCYA
    TxQT3ab7/AoRhIzzKBxnki98tsX63/Dolbwdj2wsqFHMc9ikwFPwTtYmwHYBV4GSXiHx0bH/59Ah
    WM1pF+NEHJwZRDmJXNycAA9WjQKZ7aKQRUzkuxCkPfAyAw7xzvjoyVGM5mKf5p/AfbdynMk2Omuf
    Tqj/ZA1k
    -----END CERTIFICATE-----

    I put this in a file at:

    /usr/share/ca-certificates/curl/Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.crt

    I then modified /etc/ca-certificates.conf and added the following line at the end:

    curl/Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.crt

    After that I ran the command:

    sudo update-ca-certificates

    Looking into the /etc/ssl/certs directory, I see it correctly linked:

    $ ls -al | grep cURL
    lrwxrwxrwx 1 root root 69 2012-03-27 16:03 415660c1.0 -> Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.pem
    lrwxrwxrwx 1 root root 69 2012-03-27 16:03 7651b327.0 -> Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.pem
    lrwxrwxrwx 1 root root 101 2012-03-27 16:03 Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.pem -> /usr/share/ca-certificates/curl/Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.crt

    And everything works!

    $ curl -I https://test.sagepay.com
    HTTP/1.1 200 OK...

    Read the article

  • Windows Question: RunOnce/Second Boot Issues

    - by Greg
    I am attempting to create a Windows XP SP3 image that will run my application on second boot. Here is the intended workflow:

    1) Run an image prep utility (I wrote) on Windows to add my runonce entries and clean a few things up.
    2) Reboot to Ghost, make the image file.
    3) Package into my ISO and distribute.
    4) The system is imaged by the user.
    5) On first boot, about 5 things run, one of which is a driver updater (I wrote) for my own specific devices.
    6) One of the entries inside HKCU/../runonce is a .reg file, which adds another key to HKLM/../runonce. This is how second boot is acquired.
    7) As a result of the driver updater, the user is prompted to reboot.
    8) My application is then launched from HKLM/../runonce on second boot.

    This workflow works perfectly, except on a select few legacy systems that contain devices that cause the Add Hardware wizard to pop up. When the Add Hardware wizard pops up is when I begin to see problems. It's important to note that if I manually inspect the registry after the wizard pops up, it appears as I would expect: all the first-boot scripts have run, and the registry is sitting in the state I would expect for a second-boot scenario. The problem comes when I click Next in the Add Hardware wizard: the system seems to re-run the runonce scripts (only one script now, as the initial entries have already executed and been cleared out). This causes my application to open as if it were a second boot, but only when Next is clicked in the wizard. If I click Cancel and reboot, then it also works as expected. I don't care as much about other solutions, because I could design a system that doesn't fully rely on Microsoft's registry. I simply can't find any information as to WHY this is happening. I believe this is some type of Microsoft issue that's presenting itself as a result of an overstretched image that's expected to support too many legacy platforms, but any help that can be provided would be appreciated. Thanks,
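
    For reference, a minimal sketch of the step-6 .reg file that seeds the second-boot entry (the value name and application path are hypothetical; the full RunOnce key path stands in for the elided HKLM/../runonce above):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnce]
        "SecondBoot"="C:\\MyApp\\MyApp.exe"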

    Read the article

  • Forcing DFS replication on Windows 2003 Web Edition SP2

    - by Greg B
    I have a pair of web servers running Windows Server 2003 Web Edition SP2 (build 3790). The servers are on a Gigabit LAN and are on the same subnet. I created a new link on the 1st server and added the second box as a target to the link. I configured replication as a ring. The 2nd server gets some of the files on the scheduled replication, but this varies wildly, which makes me think it's not running fully. The folder on server 1 is 1.3GB. Is there some way I can force replication (from a console?) and monitor the progress to see if/when/why it fails?
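
    Since Server 2003 DFS links replicate via FRS, the ntfrsutl utility from the Windows Support Tools can inspect and nudge it from a console. A sketch (server and replica set names are placeholders; check ntfrsutl /? for the exact syntax on your build):

        ntfrsutl sets server1                                    # list replica sets and their state
        ntfrsutl forcerepl server1 /r "ReplicaSetName" /p server2   # trigger replication with the partner now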

    Read the article

  • Setup Linksys 3200 remote access

    - by Greg
    I'm trying to set up remote access for my Linksys 3200 so that I can configure it through the WAN port. I have turned on remote access; however, when I try to connect to xxx.xxx.xxx.xxx:9999, I just get a 404 error. I have allowed RDP access to a computer behind the router, and this works fine on the same IP address. Any ideas on what else I have to do to allow remote management access? UPDATE: I tried changing the port to 80 and it works. Change it back to any other number and it doesn't work. The modem is set up with a DMZ to the router's IP. Why does it only work on port 80? BTW, I can't use port 80 because there is a website hosted behind the router.

    Read the article

  • Verification of files on backup media - after the backup

    - by Greg Sansom
    I have a system in place where I back up my work files daily to a portable hard drive. I actually have two portable hard drives: one is stored off-site, and I swap them regularly. I also keep my family photos and other historical files backed up, but I only back the photos up occasionally (i.e. when I have new photos). The backup media is for backup only, and it is unlikely I will ever read the files from it unless a disaster occurs and I lose the master. It worries me that my backed-up files could become corrupt without my knowing it. It is also feasible that my master files could become corrupt, and eventually the corruption would be replicated to the backup media. I'm currently using Cobian Backup, but I'm open to alternatives. Is there a tool I can use to confirm that the backed-up files are identical to the files that were first copied? I know it would be possible to generate a checksum and periodically validate the backup files against the original checksum, but I'm looking for a tool which will do this automatically.
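
    A sketch of the checksum idea using GNU coreutils (available on Windows via Cygwin or GnuWin32; all paths are placeholders). Recording relative paths lets the same list verify the copies under the backup root:

        $ cd /work && find . -type f -exec md5sum {} + > /backups/work.md5   # record hashes of the masters
        $ cd /mnt/portable/work && md5sum -c /backups/work.md5               # verify the copies on the backup drive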

    Read the article
