Search Results


  • How to make Mac OS X CrashReporter invoke debugger?

    - by StasM
    I have an Apache module on Mac OS X that produces random crashes. I can reproduce these crashes with a certain sequence of actions, and they produce the Crash Reporter dialog "httpd quit unexpectedly". Is there a way to make Crash Reporter launch a debugger (Xcode, gdb, anything) instead of just displaying the backtrace? I've tried running httpd under gdb with httpd -X, but the crash doesn't happen then - it happens only when many httpd processes are running at once, and I found no way to attach gdb to all of them at once. So I was hoping I could make CrashReporter attach the debugger when a specific process crashes - is there a way to do it?
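
    Since one gdb session can only debug one process, a workaround is to attach a separate batch-mode gdb to every running httpd before triggering the crash. A minimal sketch in Python, assuming gdb and pgrep are available and you have permission to attach; the log-file naming is illustrative:

      import subprocess

      # Find the PIDs of every running httpd (assumes pgrep is available).
      out = subprocess.Popen(["pgrep", "-x", "httpd"],
                             stdout=subprocess.PIPE).communicate()[0]

      for pid in out.split():
          pid = int(pid)
          log = open("gdb-httpd-%d.log" % pid, "w")
          # Batch-mode gdb: attach, let the process keep running, and when
          # a fatal signal stops it, write a full backtrace to the log.
          subprocess.Popen(
              ["gdb", "--batch", "-ex", "continue", "-ex", "backtrace full",
               "-p", str(pid)],
              stdout=log, stderr=subprocess.STDOUT)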

    Read the article

  • Where did the text of my .html file go in TextEdit on Mac? [closed]

    - by David
    I recently created a .html file in TextEdit on my Mac (OS X 10.5.8). I then opened that .html file in my browser and it showed the page I created just fine. I closed the .html file and TextEdit and refreshed the page. It still worked fine. Then I opened up the .html file in TextEdit again and all the text was gone (the page in the browser still works fine, though). Where did all the text go?

    Read the article

  • Is it possible to run a program compiled with Xcode on Mac OS X on FreeBSD? (Objective-C/Cocoa)

    - by Eonil
    Hi. I plan to build a web site whose CGI is written in Cocoa. My goal is to develop on Mac OS X and run on FreeBSD. Is this possible? As far as I know, there is a free implementation of some NeXTSTEP classes, GNUstep. The web site deals almost entirely with strings, and from reading the GNUstep documents, its classes are sufficient. The DB connection will be made with C interfaces. The biggest problem I'm concerned about is linking and binary compatibility. I'm currently configuring FreeBSD on VirtualBox, but I'd like to hear from experts about whether this is feasible. This is not a production server, just a trial. Please feel free to say anything.

    Read the article

  • What is the representation of the Mac Command key in the terminal?

    - by freethinker
    The Control key is represented by '^' in the terminal; what is the equivalent for the Command key on a Mac? I am trying to remap my bash shortcuts using stty, e.g. stty eof ^D, but instead of Control I want to use the Command key. EDIT: The issue I was trying to solve was that I wanted to interchange the Command and Control keys, because I work on OS X and Linux and the different key combinations cause me a lot of pain. So I interchanged the modifier keys using the OS X preferences. But then all the bash shortcuts like Ctrl+C became the equivalent of using the key sequence Cmd+C - which is not acceptable. Thankfully, iTerm2 supports remapping of modifier keys as well, so for iTerm2 I reversed them again, which means iTerm2 recognizes Command as Command and Control as Control. So the problem is solved for now.

    Read the article

  • On Mac OS X, do you use the shipped Python or your own?

    - by The MYYN
    On Tiger, I used a custom Python installation to evaluate newer versions, and I did not have any problems with that. Now Snow Leopard is a little more up to date and by default ships with:

    $ ls /System/Library/Frameworks/Python.framework/Versions/
    2.3 2.5 2.6 @Current

    What could be considered best practice: using the Python shipped with Mac OS X, or a custom-compiled version in, say, $HOME? Are there any advantages or disadvantages to one option over the other? My setup was fairly simple so far and looked like this: a custom-compiled Python in $HOME, and a $PATH that looks into $HOME/bin first and so picks up my private Python version. $PYTHONPATH also pointed to this local installation. This way, I did not need to sudo-install packages - virtualenv took care of the rest.
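
    Whichever option you pick, it helps to confirm which interpreter your $PATH actually resolves to and where its packages live. A minimal, setup-agnostic sketch:

      import sys
      import site

      # Installation prefix of the running interpreter
      # (e.g. /System/Library/... for Apple's build vs. somewhere under $HOME).
      print(sys.prefix)
      print(sys.version)

      # Directories searched for third-party packages.
      if hasattr(site, "getsitepackages"):
          print(site.getsitepackages())
      else:
          print(sys.path)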

    Read the article

  • How do I fix JavaHL (JNI) Not available after I have changed the logon password on my Mac?

    - by INeedHelp
    I have installed Eclipse 3.5.2 and the plugin Subversion JavaHL Native Library Adapter 1.6.9.2, and this worked without any problems. However, this morning I was forced to change the password I use to log on to my Mac, and since then I get the message "Subversion native library not available" when I try to save any changes. I have tried adding the line -Djava.library.path=/usr/lib/jni to the eclipse.ini file, but this didn't seem to make any difference. Can anyone help?

    Read the article

  • How do you install cocos2d-iphone on the Mac?

    - by johnfromberkeley
    There are no good instructions for installing cocos2d for iPhone on the Mac. I downloaded the current build from git, a folder called "cocos2d-iphone-0.99.1". I put this folder in /Developer/Library. Q: Is this right? I tried running the file called "install_template.sh"; it said the templates were already installed. Instead, I manually dragged the template folders where they belong, and they do appear in Xcode's "New Project" dialog. When I create a new cocos2d project, I see all these red links for project files instead of the regular black links. When I try to open them in the Finder, nothing happens. I can tell that something is not linked. Can someone please help walk me through this? Thanks!

    Read the article

  • How do I add items to the Finder context menu in Mac OS X?

    - by mystro
    I'm in the process of porting a Windows application to OS X (we wrote it in Java, so most of the code is portable), but I'm currently unsure how to add context menu items to the Finder window when the user right-clicks on an item (i.e. I wish to add some items to the menu that has "Open", "Open With", "Get Info", etc.). Most of the articles I've found deal specifically with Windows (I've searched for "context menus" and "shell extension", but I believe I may be searching for the wrong terms), so I'm curious how to go about adding this on the Mac, and what literature I should be reading.

    Read the article

  • I want to run both MAMP and the native local web server on Mac OS X 10.6.4

    - by user1065921
    I have set up a local web server using MAMP on ports 8888 for Apache and 8889 for MySQL - I am using this exclusively for a Drupal 6 multisite. I would also like to have a local web server using the native Mac OS X capabilities on ports 80 and 3306. Is it possible to run both the MAMP server and the native OS X web server concurrently? I have tried to enable PHP on the native server by editing the httpd.conf file, but whenever I open a .php file (any PHP file) using Firefox I get an infinite loop of blank browser windows opening, and in Safari the actual code of the PHP file is displayed rather than the PHP-processed page. Have I missed/messed up something? Thanks.

    Read the article

  • WiFi not recognized

    - by pumper
    I had working WiFi; then one day Ubuntu asked me to update some packages and restart the system, and after that, no WiFi. This is my wireless_script output:

    ########## wireless info START ##########

    ##### release #####
    Distributor ID: Ubuntu
    Description: Ubuntu 14.04 LTS
    Release: 14.04
    Codename: trusty

    ##### kernel #####
    Linux S510p 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

    ##### lspci #####
    02:00.0 Network controller [0280]: Qualcomm Atheros QCA9565 / AR9565 Wireless Network Adapter [168c:0036] (rev 01)
      Subsystem: Lenovo Device [17aa:3026]
      Kernel driver in use: ath9k
    03:00.0 Ethernet controller [0200]: Qualcomm Atheros AR8162 Fast Ethernet [1969:1090] (rev 10)
      Subsystem: Lenovo Device [17aa:3807]
      Kernel driver in use: alx

    ##### lsusb #####
    Bus 001 Device 006: ID 0eef:a111 D-WAV Scientific Co., Ltd
    Bus 001 Device 007: ID 0cf3:3004 Atheros Communications, Inc.
    Bus 001 Device 004: ID 174f:1488 Syntek
    Bus 001 Device 003: ID 03f0:5607 Hewlett-Packard
    Bus 001 Device 002: ID 8087:8000 Intel Corp.
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 002 Device 002: ID 15d9:0a4c Trust International B.V. USB+PS/2 Optical Mouse
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    ##### PCMCIA Card Info #####

    ##### rfkill #####
    0: ideapad_wlan: Wireless LAN
      Soft blocked: no
      Hard blocked: no
    1: ideapad_bluetooth: Bluetooth
      Soft blocked: no
      Hard blocked: no
    2: phy0: Wireless LAN
      Soft blocked: no
      Hard blocked: no
    3: hci0: Bluetooth
      Soft blocked: no
      Hard blocked: no

    ##### iw reg get #####
    country 00:
      (2402 - 2472 @ 40), (3, 20)
      (2457 - 2482 @ 40), (3, 20), PASSIVE-SCAN, NO-IBSS
      (2474 - 2494 @ 20), (3, 20), NO-OFDM, PASSIVE-SCAN, NO-IBSS
      (5170 - 5250 @ 40), (3, 20), PASSIVE-SCAN, NO-IBSS
      (5735 - 5835 @ 40), (3, 20), PASSIVE-SCAN, NO-IBSS

    ##### interfaces #####
    # interfaces(5) file used by ifup(8) and ifdown(8)
    auto lo
    iface lo inet loopback
    auto dsl-provider
    iface dsl-provider inet ppp
    pre-up /sbin/ifconfig wlan0 up # line maintained by pppoeconf
    provider dsl-provider
    auto wlan0
    iface wlan0 inet manual

    ##### iwconfig #####
    wlan0  IEEE 802.11bgn  ESSID:off/any
           Mode:Managed  Access Point: Not-Associated  Tx-Power=16 dBm
           Retry long limit:7  RTS thr:off  Fragment thr:off
           Power Management:off

    ##### route #####
    Kernel IP routing table
    Destination  Gateway  Genmask  Flags  Metric  Ref  Use  Iface

    ##### resolv.conf #####

    ##### nm-tool #####
    NetworkManager Tool
    State: connected (global)
    - Device: eth0 -----------------------------------------------------------------
      Type: Wired
      Driver: alx
      State: unavailable
      Default: no
      HW Address: <MAC address removed>
      Capabilities:
        Carrier Detect: yes
      Wired Properties
        Carrier: off
    - Device: wlan0 ----------------------------------------------------------------
      Type: 802.11 WiFi
      Driver: ath9k
      State: unmanaged
      Default: no
      HW Address: <MAC address removed>
      Capabilities:
      Wireless Properties
        WEP Encryption: yes
        WPA Encryption: yes
        WPA2 Encryption: yes
      Wireless Access Points

    ##### NetworkManager.state #####
    [main]
    NetworkingEnabled=true
    WirelessEnabled=true
    WWANEnabled=true
    WimaxEnabled=true

    ##### NetworkManager.conf #####
    [main]
    plugins=ifupdown,keyfile,ofono
    dns=dnsmasq
    no-auto-default=<MAC address removed>,
    [ifupdown]
    managed=false

    ##### iwlist #####
    wlan0  Scan completed :
      Cell 01 - Address: <MAC address removed>
        Channel:1
        Frequency:2.412 GHz (Channel 1)
        Quality=55/70  Signal level=-55 dBm
        Encryption key:on
        ESSID:"mohsen"
        Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s
        Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
        Mode:Master
        Extra:tsf=000000076c342498
        Extra: Last beacon: 12ms ago
        IE: Unknown: 00066D6F6873656E
        IE: Unknown: 010882848B960C121824
        IE: Unknown: 030101
        IE: Unknown: 2A0104
        IE: Unknown: 32043048606C

    ##### iwlist channel #####
    wlan0  13 channels in total; available frequencies :
      Channel 01 : 2.412 GHz
      Channel 02 : 2.417 GHz
      Channel 03 : 2.422 GHz
      Channel 04 : 2.427 GHz
      Channel 05 : 2.432 GHz
      Channel 06 : 2.437 GHz
      Channel 07 : 2.442 GHz
      Channel 08 : 2.447 GHz
      Channel 09 : 2.452 GHz
      Channel 10 : 2.457 GHz
      Channel 11 : 2.462 GHz
      Channel 12 : 2.467 GHz
      Channel 13 : 2.472 GHz

    ##### lsmod #####
    ath3k 13318 0
    bluetooth 395423 23 bnep,ath3k,btusb,rfcomm
    ath9k 164164 0
    ath9k_common 13551 1 ath9k
    ath9k_hw 453856 2 ath9k_common,ath9k
    ath 28698 3 ath9k_common,ath9k,ath9k_hw
    mac80211 626489 1 ath9k
    cfg80211 484040 3 ath,ath9k,mac80211

    ##### modinfo #####
    filename: /lib/modules/3.13.0-24-generic/kernel/drivers/bluetooth/ath3k.ko
    firmware: ath3k-1.fw
    license: GPL
    version: 1.0
    description: Atheros AR30xx firmware driver
    author: Atheros Communications
    srcversion: 98A5245588C09E5E41690D0
    alias: usb:v0489pE036d*dc*dsc*dp*ic*isc*ip*in*
    alias: ...
    alias: usb:v0CF3p3000d*dc*dsc*dp*ic*isc*ip*in*
    depends: bluetooth
    intree: Y
    vermagic: 3.13.0-24-generic SMP mod_unload modversions
    signer: Magrathea: Glacier signing key
    sig_key: <MAC address removed>:D9:06:21:70:6E:8D:06:60:4D:73:0B:35:9F:C0
    sig_hashalgo: sha512

    filename: /lib/modules/3.13.0-24-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k.ko
    license: Dual BSD/GPL
    description: Support for Atheros 802.11n wireless LAN cards.
    author: Atheros Communications
    srcversion: BAF225EEB618908380B28DA
    alias: platform:qca955x_wmac
    alias: platform:ar934x_wmac
    alias: platform:ar933x_wmac
    alias: platform:ath9k
    alias: pci:v0000168Cd00000036sv*sd*bc*sc*i*
    alias: ...
    alias: pci:v0000168Cd00000023sv*sd*bc*sc*i*
    depends: ath9k_hw,mac80211,ath9k_common,cfg80211,ath
    intree: Y
    vermagic: 3.13.0-24-generic SMP mod_unload modversions
    signer: Magrathea: Glacier signing key
    sig_key: <MAC address removed>:D9:06:21:70:6E:8D:06:60:4D:73:0B:35:9F:C0
    sig_hashalgo: sha512
    parm: debug:Debugging mask (uint)
    parm: nohwcrypt:Disable hardware encryption (int)
    parm: blink:Enable LED blink on activity (int)
    parm: btcoex_enable:Enable wifi-BT coexistence (int)
    parm: bt_ant_diversity:Enable WLAN/BT RX antenna diversity (int)
    parm: ps_enable:Enable WLAN PowerSave (int)

    filename: /lib/modules/3.13.0-24-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_common.ko
    license: Dual BSD/GPL
    description: Shared library for Atheros wireless 802.11n LAN cards.
    author: Atheros Communications
    srcversion: 696B00A6C59713EC0966997
    depends: ath,ath9k_hw
    intree: Y
    vermagic: 3.13.0-24-generic SMP mod_unload modversions
    signer: Magrathea: Glacier signing key
    sig_key: <MAC address removed>:D9:06:21:70:6E:8D:06:60:4D:73:0B:35:9F:C0
    sig_hashalgo: sha512

    filename: /lib/modules/3.13.0-24-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_hw.ko
    license: Dual BSD/GPL
    description: Support for Atheros 802.11n wireless LAN cards.
    author: Atheros Communications
    srcversion: 4809F3842A0542CD6B556D3
    depends: ath
    intree: Y
    vermagic: 3.13.0-24-generic SMP mod_unload modversions
    signer: Magrathea: Glacier signing key
    sig_key: <MAC address removed>:D9:06:21:70:6E:8D:06:60:4D:73:0B:35:9F:C0
    sig_hashalgo: sha512

    filename: /lib/modules/3.13.0-24-generic/kernel/drivers/net/wireless/ath/ath.ko
    license: Dual BSD/GPL
    description: Shared library for Atheros wireless LAN cards.
    author: Atheros Communications
    srcversion: 88A67C5359B02C5A710AFCF
    depends: cfg80211
    intree: Y
    vermagic: 3.13.0-24-generic SMP mod_unload modversions
    signer: Magrathea: Glacier signing key
    sig_key: <MAC address removed>:D9:06:21:70:6E:8D:06:60:4D:73:0B:35:9F:C0
    sig_hashalgo: sha512

    ##### modules #####
    lp
    rtc

    ##### blacklist #####
    [/etc/modprobe.d/blacklist-ath_pci.conf]
    blacklist ath_pci
    [/etc/modprobe.d/blacklist.conf]
    blacklist evbug
    blacklist usbmouse
    blacklist usbkbd
    blacklist eepro100
    blacklist de4x5
    blacklist eth1394
    blacklist snd_intel8x0m
    blacklist snd_aw2
    blacklist i2c_i801
    blacklist prism54
    blacklist bcm43xx
    blacklist garmin_gps
    blacklist asus_acpi
    blacklist snd_pcsp
    blacklist pcspkr
    blacklist amd76x_edac
    [/etc/modprobe.d/fbdev-blacklist.conf]
    blacklist arkfb
    blacklist aty128fb
    blacklist atyfb
    blacklist radeonfb
    blacklist cirrusfb
    blacklist cyber2000fb
    blacklist gx1fb
    blacklist gxfb
    blacklist kyrofb
    blacklist matroxfb_base
    blacklist mb862xxfb
    blacklist neofb
    blacklist nvidiafb
    blacklist pm2fb
    blacklist pm3fb
    blacklist s3fb
    blacklist savagefb
    blacklist sisfb
    blacklist tdfxfb
    blacklist tridentfb
    blacklist viafb
    blacklist vt8623fb

    ##### udev rules #####
    # PCI device 0x1969:0x1090 (alx)
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="<MAC address removed>", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
    # PCI device 0x168c:0x0036 (ath9k)
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="<MAC address removed>", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="wlan*", NAME="wlan0"

    ##### dmesg #####
    [ 1.707662] psmouse serio1: elantech: assuming hardware version 3 (with firmware version 0x450f03)
    [ 11.918852] ath: phy0: WB335 1-ANT card detected
    [ 11.918856] ath: phy0: Set BT/WLAN RX diversity capability
    [ 11.926438] ath: phy0: Enable LNA combining
    [ 11.928469] ath: phy0: ASPM enabled: 0x42
    [ 11.928473] ath: EEPROM regdomain: 0x65
    [ 11.928475] ath: EEPROM indicates we should expect a direct regpair map
    [ 11.928478] ath: Country alpha2 being used: 00
    [ 11.928479] ath: Regpair used: 0x65
    [ 14.066021] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready

    ########## wireless info END ############

    Read the article

  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is storage management, given the relentless growth in the volume of information. Even if you have storage appliances and a lot of terabytes, things like backup, compression, deduplication, storage relocation, encryption, and availability can be a nightmare. One standard option you have with Oracle WebCenter Content is to store the data in the database, and Oracle Database lets you leverage features like compression, deduplication, encryption, and seamless backup. But with a huge volume, the challenge passes to the DBA: keeping the WebCenter Content database up and running. One solution is to use database partitions for your content repository. But what are the implications of this? Can it fit your business requirements? Well, yes; it's up to you how you manage it, you just need a good plan. During your storage planning, keep in mind what you need. Do you have to store petabytes of documents? Do you need everything online? Is there a way to logically separate the "good content" from the "legacy content"? The first thing that comes to mind is to use the creation date of the document, but remember that a document can receive many revisions, so you might consider the revision creation date instead. Your plan can also include more complex rules, such as per document type, per custom metadata field (a department, for example), or a hybrid of date, document type, and a specific virtual folder. Extrapolating further, you can distribute your repository across different servers, different disks, and different disk types (such as SSD, SAS, SATA, tape, ...), separated according to your business requirements, keeping the "hot" content apart from the legacy content and easily matching your compliance requirements. If you decide to partition by revision, the simple way is to use the dId, the sequential unique id of every content item created in WebCenter Content, or dLastModified, the date field of the FileStorage table that records when the content was inserted into the table using SecureFiles. Using the scenario of a repository partitioned hierarchically by date, we will transform the FileStorage table into a partitioned table using "partition by range" on the dLastModified column. (You can use dId, or a join with other tables for other metadata such as dDocType, security, etc.) The test scenario below covers:

    - Previously existing data in the JDBC Storage, to be migrated to the new partitioned JDBC Storage
    - Partitioning by date
    - Automatic generation of new partitions based on a pre-defined interval (available only with Oracle Database 11g+)
    - Deduplication and compression for legacy data
    - Oracle WebCenter Content 11g PS5 (which may include some customizations that do not affect the test scenario)

    For the test case you need some data stored using JDBC Storage to serve as the "legacy" data. If you have not done this before, just create a storage rule pointing to the JDBC Storage, enable the StorageRule metadata field in the UI, and upload some documents using this rule. For this test case you can run as the schema owner or as a DBA user; we will use the schema owner TESTS_OCS. I can't forget to say that this is just a test, and you should make a proper backup of your environment first. If you run as the schema owner, you need some privileges; grant them using the DBA user:

    REM Grant privileges required for online redefinition.
    GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS;
    GRANT ALTER ANY TABLE TO TESTS_OCS;
    GRANT DROP ANY TABLE TO TESTS_OCS;
    GRANT LOCK ANY TABLE TO TESTS_OCS;
    GRANT CREATE ANY TABLE TO TESTS_OCS;
    GRANT SELECT ANY TABLE TO TESTS_OCS;
    REM Privileges required to perform cloning of dependent objects.
    GRANT CREATE ANY TRIGGER TO TESTS_OCS;
    GRANT CREATE ANY INDEX TO TESTS_OCS;

    In our test scenario we will separate the content as Legacy, Day1, Day2, Day3, and Future. The last one will be partitioned automatically, using three tablespaces in round-robin mode. In a real scenario the partition rule could be per month, per year, or any rule you choose. The tablespaces for the test scenario:

    CREATE TABLESPACE TESTS_OCS_PART_LEGACY DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY1 DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY2 DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY3 DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A DATAFILE 'tests_ocs_part_round_robin_a.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B DATAFILE 'tests_ocs_part_round_robin_b.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C DATAFILE 'tests_ocs_part_round_robin_c.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;

    Before starting, gather optimizer statistics on the current FileStorage table:

    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE);

    Now check whether the redefinition process can be executed:

    EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage', DBMS_REDEFINITION.CONS_USE_PK);

    If there are no error messages, you are good to go. Create a partitioned interim FileStorage table.
    You need to create a new table with the partition information to act as the interim table:

    CREATE TABLE FILESTORAGE_PART (
      DID NUMBER(*,0) NOT NULL ENABLE,
      DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
      DLASTMODIFIED TIMESTAMP (6),
      DFILESIZE NUMBER(*,0),
      DISDELETED VARCHAR2(1 CHAR),
      BFILEDATA BLOB
    )
    LOB (BFILEDATA) STORE AS SECUREFILE (
      ENABLE STORAGE IN ROW
      NOCACHE LOGGING
      KEEP_DUPLICATES
      NOCOMPRESS
    )
    PARTITION BY RANGE (DLASTMODIFIED)
    INTERVAL (NUMTODSINTERVAL(1,'DAY'))
    STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C)
    (
      PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_LEGACY
        LOB (BFILEDATA) STORE AS SECUREFILE (
          TABLESPACE TESTS_OCS_PART_LEGACY
          RETENTION NONE
          DEDUPLICATE
          COMPRESS HIGH
        ),
      PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY1
        LOB (BFILEDATA) STORE AS SECUREFILE (
          TABLESPACE TESTS_OCS_PART_DAY1
          RETENTION AUTO
          KEEP_DUPLICATES
          COMPRESS
        ),
      PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY2
        LOB (BFILEDATA) STORE AS SECUREFILE (
          TABLESPACE TESTS_OCS_PART_DAY2
          RETENTION AUTO
          KEEP_DUPLICATES
          NOCOMPRESS
        ),
      PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY3
        LOB (BFILEDATA) STORE AS SECUREFILE (
          TABLESPACE TESTS_OCS_PART_DAY3
          RETENTION AUTO
          KEEP_DUPLICATES
          NOCOMPRESS
        )
    );

    After the creation you should see your partitions defined. Note that only the fixed range partitions have been created; none of the interval partitions have been created yet. Start the redefinition process:

    BEGIN
      DBMS_REDEFINITION.START_REDEF_TABLE(
        uname => 'TESTS_OCS'
        ,orig_table => 'FileStorage'
        ,int_table => 'FileStorage_PART'
        ,col_mapping => NULL
        ,options_flag => DBMS_REDEFINITION.CONS_USE_PK
      );
    END;

    This operation can take some time to complete, depending on how much content you have and on the size of the table. Using the DBA user, you can check the progress with this command:

    SELECT * FROM v$sesstat WHERE sid = 1;

    Copy the dependent objects:

    DECLARE
      redefinition_errors PLS_INTEGER := 0;
    BEGIN
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
        uname => 'TESTS_OCS'
        ,orig_table => 'FileStorage'
        ,int_table => 'FileStorage_PART'
        ,copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS
        ,copy_triggers => TRUE
        ,copy_constraints => TRUE
        ,copy_privileges => TRUE
        ,ignore_errors => TRUE
        ,num_errors => redefinition_errors
        ,copy_statistics => FALSE
        ,copy_mvlog => FALSE
      );
      IF (redefinition_errors > 0) THEN
        DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: ' || TO_CHAR(redefinition_errors));
      END IF;
    END;

    With the DBA user, verify that there are no errors:

    SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS;

    Note that this will show two rows related to the constraints; this is expected.
    Synchronize the interim table FileStorage_PART:

    BEGIN
      DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
        uname => 'TESTS_OCS',
        orig_table => 'FileStorage',
        int_table => 'FileStorage_PART');
    END;

    Gather statistics on the new table:

    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE);

    Complete the redefinition:

    BEGIN
      DBMS_REDEFINITION.FINISH_REDEF_TABLE(
        uname => 'TESTS_OCS',
        orig_table => 'FileStorage',
        int_table => 'FileStorage_PART');
    END;

    During the execution, the FileStorage table is locked in exclusive mode until the operation finishes. After the last command, the FileStorage table is partitioned. If you have content outside the fixed partition ranges, you should see the new interval partitions created automatically; no error is raised if you "forgot" to create all the future ranges. You can now drop the FileStorage_PART table:

    DROP TABLE FileStorage_PART PURGE;

    To check that the FileStorage table is valid and partitioned, use the command:

    SELECT num_rows, partitioned FROM user_tables WHERE table_name = 'FILESTORAGE';

    You can list the contents of the FileStorage table in a specific partition, for example:

    SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY);

    Some useful commands that you can use to inspect the partitions (note that you need to run them as a DBA user):

    SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE';
    SELECT * FROM DBA_TABLESPACES WHERE tablespace_name LIKE 'TESTS_OCS%';

    After the redefinition process completes, you have a new FileStorage table storing all content whose storage rule points to the JDBC Storage, partitioned using the rules defined when creating the temporary interim FileStorage_PART table. At this point you can test WebCenter Content by downloading the documents (original and renditions). Note that the content could already be in the cache area: look in the weblayout directory to see if a file with the same id is there, then click on the web rendition of your test file and check that the file is created and you can open it; this means that everything is working. The redefinition process can be repeated many times, which lets you test for the best layout over and over again. Now some interesting maintenance actions related to the partitions:

    Making a tablespace read-only. There are no issues viewing content, since WebCenter Content does not alter existing revisions. However, when you try to delete a content item that lives in a read-only tablespace, an error occurs and the document is not deleted. The only way to prevent these errors today is a custom component that checks the partitions: if a document lives in a read-only repository, it executes the deletion of the metadata and marks the document to be deleted at the next database maintenance, such as a new redefinition.

    Taking a tablespace offline, for archiving purposes or any other reason.
    When you try to open a document stored in that tablespace, you will receive an error saying the content could not be retrieved, but the other, online tablespaces are not affected. The behavior is the same when deleting documents. Again, a custom component is the solution: if a document is "out of range", the component can show a message that the repository for that document is offline. This can be extended with an option for the user to request that it be put online again.

    Moving some legacy content to an offline repository (table), using the Exchange option to move the content from one partition to an empty non-partitioned table such as FileStorage_LEGACY. Note that this option removes the rows from FileStorage, so the stored content can no longer be opened. You always need to keep the indexes and constraints in mind.

    A redefinition separating the original content (vault) from the renditions, split by date at the same time. This could be an option for DAM environments that want a special place for the renditions and put the original files on storage with lower performance. The process is the same; you just need to change the script for the interim table to use composite partitioning. It will be something like:

    CREATE TABLE FILESTORAGE_RenditionPart (
      DID NUMBER(*,0) NOT NULL ENABLE,
      DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
      DLASTMODIFIED TIMESTAMP (6),
      DFILESIZE NUMBER(*,0),
      DISDELETED VARCHAR2(1 CHAR),
      BFILEDATA BLOB
    )
    LOB (BFILEDATA) STORE AS SECUREFILE (
      ENABLE STORAGE IN ROW
      NOCACHE LOGGING
      KEEP_DUPLICATES
      NOCOMPRESS
    )
    PARTITION BY LIST (DRENDITIONID)
    SUBPARTITION BY RANGE (DLASTMODIFIED)
    (
      PARTITION Vault VALUES ('primaryFile')
      (
        SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE)
      ),
      PARTITION WebLayout VALUES ('webViewableFile')
      (
        SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE)
      ),
      PARTITION Special VALUES ('Special')
      (
        SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE)
      )
    ) ENABLE ROW MOVEMENT;

    The next post related to the partitioned repository will come with a sample component to handle the possible exceptions when you need to take a tablespace/partition offline or move it to another place. We can also include some integration with Retention Management and Records Management. Another subject related to partitioning is the ability to create a FileStore provider pointing to a different database, raising the level of distributed storage versus performance. Let us know if this is important to you, or leave a comment if you have a use case not listed. Cross-posted on the blog.ContentrA.com

    Read the article

  • Gimpshop 2.8. Available for Win & Mac. No Linux version?

    - by Jorge M. Treviño
    Finally got around to upgrading 12.04 to 12.10. One of the nice things about the new version is that Gimp 2.8 is in the repositories. I installed it, and it's a far cry from the 2.2, 2.4, and 2.6 versions, which were (at least from my untrained point of view) next to unusable. Now 2.8 is much more intuitive, for Photoshop users at least, and I'm trying to really learn it. Browsing around, I found that there's a new version of Gimpshop, which used to be a sorely amateurish attempt at a PS interface over an old Gimp version, and sure to mess up your system. Seeing "2.8" prominently displayed on the page, I decided to try the Windows version. Oddly, there's a Mac version too, but no Linux one; the link leads to a non-existent file on one of the cloud storage sites. After the Windows version was installed, I fired it up and, surprise!, it's exactly the same, as far as I can tell without diving into menus and dialogs, as the plain vanilla Ubuntu version I have installed. Can anybody shed light on what goes on here? Is this a scheme to get inadvertent users to install some "optional extras" that come with the installer? Very curious about it (thank God I'm not a cat).

    Read the article

  • How to install PySide v0.3.1 on Mac OS X?

    - by ivo
    I'm trying to install PySide v0.3.1 on Mac OS X, for Qt development in Python. As prerequisites, I have installed CMake and the Qt SDK. I have gone through the documentation and come up with the following installation script:

    export PYSIDE_BASE_DIR="<my_dir>"
    export APIEXTRACTOR_DIR="$PYSIDE_BASE_DIR/apiextractor-0.5.1"
    export GENERATORRUNNER_DIR="$PYSIDE_BASE_DIR/generatorrunner-0.4.2"
    export SHIBOKEN_DIR="$PYSIDE_BASE_DIR/shiboken-0.3.1"
    export PYSIDE_DIR="$PYSIDE_BASE_DIR/pyside-qt4.6+0.3.1"
    export PYSIDE_TOOLS_DIR="$PYSIDE_BASE_DIR/pyside-tools-0.1.3"
    pushd .
    cd $APIEXTRACTOR_DIR
    cmake .
    cd $GENERATORRUNNER_DIR
    cmake -DApiExtractor_DIR=$APIEXTRACTOR_DIR .
    cd $SHIBOKEN_DIR
    cmake -DApiExtractor_DIR=$APIEXTRACTOR_DIR -DGeneratorRunner_DIR=$GENERATORRUNNER_DIR .
    cd $PYSIDE_DIR
    cmake -DShiboken_DIR=$SHIBOKEN_DIR/libshiboken -DGENERATOR=$GENERATORRUNNER_DIR .
    cd $PYSIDE_TOOLS_DIR
    cmake .
    popd

    Now, I don't know if this installation script is right, but apparently everything works fine: each component (apiextractor, generatorrunner, shiboken, pyside-qt, and pyside-tools) gets compiled into its own directory. The problem is that I don't quite understand how PySide gets into the system's Python environment. In fact, when I start a Python shell, I cannot import PySide:

    >>> import PySide
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ImportError: No module named PySide

    Note: I am aware of the "Installing PySide - OSX" question, but that question is no longer relevant, because it concerns a specific dependency on the Boost libraries, and with version 0.3.0 PySide moved from a Boost-based source code to a CPython one.
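
    To see whether the built module ever landed anywhere the interpreter looks, you can inspect the import path directly. A minimal sketch (written for Python 2, matching the era of the question):

      import imp
      import sys

      # Directories Python searches when importing.
      for p in sys.path:
          print(p)

      # Where (if anywhere) PySide would be loaded from.
      try:
          print(imp.find_module('PySide'))
      except ImportError:
          print('PySide is not on sys.path')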

    Read the article

  • A UnicodeDecodeError that occurs with json in Python on Windows, but not on the Mac.

    - by ventolin
    On Windows, I have the following problem:

    >>> string = "Don´t Forget To Breathe"
    >>> import json,os,codecs
    >>> f = codecs.open("C:\\temp.txt","w","UTF-8")
    >>> json.dump(string,f)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "C:\Python26\lib\json\__init__.py", line 180, in dump
        for chunk in iterable:
      File "C:\Python26\lib\json\encoder.py", line 294, in _iterencode
        yield encoder(o)
    UnicodeDecodeError: 'utf8' codec can't decode bytes in position 3-5: invalid data

    (Notice the non-ASCII apostrophe in the string.) However, my friend, on his Mac (also using Python 2.6), can run through this like a breeze:

    > string = "Don´t Forget To Breathe"
    > import json,os,codecs
    > f = codecs.open("/tmp/temp.txt","w","UTF-8")
    > json.dump(string,f)
    > f.close(); open('/tmp/temp.txt').read()
    '"Don\\u00b4t Forget To Breathe"'

    Why is this? I've also tried using UTF-16 and UTF-32 with json and codecs, but to no avail.
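
    For what it's worth, the asymmetry is consistent with the two consoles handing Python differently encoded byte strings: a Mac Terminal usually supplies UTF-8 bytes, while a Windows console supplies bytes in its code page, which the JSON encoder then fails to decode as UTF-8. A minimal sketch of the usual workaround, decoding to unicode before dumping (the cp1252 fallback is an assumption; check sys.stdin.encoding on your machine):

      import codecs
      import json
      import sys

      raw = "Don\xb4t Forget To Breathe"    # byte string as a cp1252 console would produce it
      enc = sys.stdin.encoding or 'cp1252'  # assumption: fall back to cp1252

      text = raw.decode(enc)                # now a unicode object, nothing left for json to guess
      f = codecs.open("temp.txt", "w", "UTF-8")
      json.dump(text, f)
      f.close()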

    Read the article

  • Can a standalone Ruby script (Windows and Mac) reload and restart itself?

    - by user30997
    I have a master-workers architecture where the number of workers is growing on a weekly basis. I can no longer be expected to ssh or remote-console into each machine to kill the worker, do a source control sync, and restart. I would like the master to put a message out on the network telling each machine to sync and restart. That's where I hit a roadblock. If I were using any sane platform, I could just do:

    exec('ruby', __FILE__)

    ...and be done. However, I did the following test:

    p Process.pid
    sleep 1
    exec('ruby', __FILE__)

    ...and on Windows, I get one Ruby instance for each call to exec. None of them die until I hit ^C in the window in question. On every platform I tried this on, it executes the new version of the file each time, which I have verified by making simple edits to the test script while the test marched along. The reason I'm printing the pid is to double-check the behavior I'm seeing. On Windows, I am getting a different pid with each execution - which I would expect, considering that I am seeing a new process in the Task Manager for each run. The Mac is behaving correctly: the pid is the same for every system call, and I have verified with dtrace that each run triggers a call to the execve syscall. So, in short, is there a way to get a Windows Ruby script to restart its execution so it will be running any code - including itself - that has changed during its execution? Please note that this is not a Rails application, though it does use ActiveRecord.

    Read the article

  • I'm trying to install psycopg2 onto Mac OS 10.6.3; it claims it can't find "stdarg.h" but I can see it

    - by cojadate
    I'm desperately trying to install psycopg2 but keep running into errors. The latest one seems to involve it not being able to find "stdarg.h" (see output below). However, I can see with my own eyes that a file called stdarg.h exists at /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h (where it claims it can't find anything), so I've no idea what to do about it. I'm running Mac OS 10.6.3, and within the last few days I've made sure I have all the latest OS developer tools. I have Python 2.6.2 and PostgreSQL 8.4, if that makes any difference.

    python setup.py install
    running install
    running build
    running build_py
    running build_ext
    building 'psycopg2._psycopg' extension
    creating build/temp.macosx-10.3-fat-2.6
    creating build/temp.macosx-10.3-fat-2.6/psycopg
    gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.2.1 (dt dec ext pq3)" -DPG_VERSION_HEX=0x080404 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -DHAVE_PQPROTOCOL3=1 -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -I. -I/opt/local/include/postgresql84 -I/opt/local/include/postgresql84/server -c psycopg/psycopgmodule.c -o build/temp.macosx-10.3-fat-2.6/psycopg/psycopgmodule.o
    In file included from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/unicodeobject.h:4,
                     from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:85,
                     from psycopg/psycopgmodule.c:27:
    /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory
    In file included from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/unicodeobject.h:4,
                     from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:85,
                     from psycopg/psycopgmodule.c:27:
    /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory
    lipo: can't figure out the architecture type of: /var/folders/MQ/MQ-tWOWWG+izzuZCrAJpzk+++TI/-Tmp-//ccakFhRS.out
    error: command 'gcc' failed with exit status

    Read the article

  • Why can't I display a unicode character in the Python Interpreter on Mac OS X Terminal.app?

    - by apphacker
    If I try to paste a unicode character such as the middle dot (·) into my Python interpreter, it does nothing. I'm using Terminal.app on Mac OS X, and when I'm simply in bash I have no trouble:

    :~$ ·

    But in the interpreter:

    :~$ python
    Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29)
    [GCC 4.2.1 (Apple Inc. build 5646)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>>

    I get nothing; it just ignores that I pasted the character. If I use the escaped \xNN\xNN representation of the middle dot, '\xc2\xb7', and try to convert it to unicode, trying to show the dot causes the interpreter to throw an error:

    >>> unicode('\xc2\xb7')
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 0: ordinal not in range(128)

    I have set up 'utf-8' as my default encoding in sitecustomize.py, so:

    >>> sys.getdefaultencoding()
    'utf-8'

    What gives? It's not the Terminal. It's not Python. What am I doing wrong?! This question is not related to this other question, as that individual is able to paste unicode into his Terminal.
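
    For reference, the bytes \xc2\xb7 are the UTF-8 encoding of U+00B7, so naming the codec explicitly decodes them regardless of whatever the default encoding turns out to be. A minimal sketch (Python 2):

      # '\xc2\xb7' is the UTF-8 byte sequence for U+00B7 (MIDDLE DOT).
      s = '\xc2\xb7'

      # unicode() with no encoding argument falls back to the default codec
      # (usually ASCII); passing 'utf-8' explicitly sidesteps that.
      u = unicode(s, 'utf-8')
      print(repr(u))            # u'\xb7'
      print(u.encode('utf-8'))  # prints the dot on a UTF-8 terminal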

    Read the article

  • What is the right path for PHP includes on a Mac?

    - by skorned
    Running Mac OS X 10.5.8, with PHP 5.2.11 pre-installed, using Coda 1.6.10. I'm writing PHP files and then previewing them run from file, not from a server. This was working fine until I tried PHP includes. These don't work with a relative path, only with an absolute path from the root of the drive. Is there any way I can use statements like include_once "common/header.php"; without specifying my entire file path, like so: include_once "/Volumes/Macintosh HD/Users/neil/Desktop/Website/ColoredLists_v1.0/common/base.php"; where ColoredLists_v1.0 is the directory with all the website files in it? I tried solutions like prepending $_SERVER['DOCUMENT_ROOT'] or dirname(__FILE__) to the file paths, but that didn't work, as the variables were not set. Is there any easy way to do this, or a configuration I can change so that it looks in a specific directory by default instead of looking at the drive root? Currently, echoing the include path shows .: When I include this line at the start of the script, it works: set_include_path('/Volumes/Macintosh HD/Users/neil/Desktop/Website/ColoredLists_v1.0'); However, if I want to do this for all my scripts, I can't seem to make the change permanent. Even after I edited the include_path in my php.ini, it doesn't seem to work.

    Read the article

  • Can anyone get the app-engine plugin working with Grails on Mac OS X?

    - by tim
    I have been trying for 4 days to get app-engine and Grails working together on my Mac, to no avail. I am using the latest Groovy/Grails and App Engine SDK versions, and I'm following the app-engine plugin instructions step by step on the Grails site: http://grails.org/plugin/app-engine

    Groovy Version: 1.7.1 JVM: 1.5.0_22
    Grails 1.3.0.RC1
    echo $APPENGINE_HOME reveals /Users/markstim/appengine-java-sdk-1.3.2

    I perform the following steps:

    1. grails create-app myapp
    2. cd myapp; grails list-plugins reveals:
       hibernate 1.3.0.RC1 -- Hibernate for Grails
       tomcat 1.3.0.RC1 -- Apache Tomcat plugin for Grails
    3. add the following line to Config.groovy: google.appengine.application="myapp"
    4. install the plugin with grails install-plugin app-engine and answer 'jpa' when asked (no errors yet)

    The installed plugins list now looks like:

    app-engine 0.8.9 -- Grails AppEngine plugin
    gorm-jpa 0.7.1 -- GORM-JPA Plugin

    Then grails run-app gets this error as the server is coming up:

    [java] WARNING: Nested in org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'pluginManager' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is org.codehaus.groovy.grails.exceptions.NewInstanceCreationException: Could not create a new instance of class [GormJpaGrailsPlugin]!:
    [java] java.lang.NoClassDefFoundError: org.grails.jpa.JpaPluginSupport

    Then, if I navigate to localhost:8080, I get:

    HTTP ERROR: 503
    Problem accessing /myapp. Reason: SERVICE_UNAVAILABLE
    Powered by Jetty://

    Read the article

  • Mac: How to check if a file is still being copied, in C++?

    - by Peda Pola
    In my current project, we have a requirement to check whether a file is still being copied. We have already developed a library that raises OS notifications like file_added, file_removed, file_modified, and file_renamed on a particular folder, along with the corresponding file path. The problem is that if you add, say, a 1 GB file, it fires multiple notifications - file_added, file_modified, file_modified - while the file is being copied. I decided to suppress these extra notifications by checking whether the file is still copying, and to ignore events based on that. I have written the code below, which takes a file path as an input parameter and reports whether the file is being copied. Details: on the Mac, while a file is being copied, its creation date is set to a date before 1970; once the copy completes, the date is set to the current date. I am using this technique to decide that the file is being copied. Problem: when we copy a file in the Terminal, this fails. Please advise me on any approach.

    bool isBeingCopied(const boost::filesystem::path &filePath)
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        bool isBeingCopied = false;
        if ([[[[NSFileManager defaultManager]
                attributesOfItemAtPath:[NSString stringWithUTF8String:filePath.string().c_str()]
                                 error:nil] fileCreationDate] timeIntervalSince1970] < 0)
        {
            isBeingCopied = true;
        }
        [pool release];
        return isBeingCopied;
    }
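
    One portable fallback, not from the original post, is to poll the file size and treat a still-growing file as "being copied"; cp in the Terminal grows the destination file steadily, so the heuristic covers that case too. A minimal sketch in Python (the one-second interval is arbitrary, and a stable size does not strictly guarantee the writer has finished):

      import os
      import time

      def is_being_copied(path, interval=1.0):
          # If the size changes across the interval, a copy is most likely
          # still in progress; a stable size is only a heuristic "done".
          before = os.path.getsize(path)
          time.sleep(interval)
          return os.path.getsize(path) != before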

    Read the article

  • How do I install PHP with JSON and OAuth on Mac OS X Snow Leopard?

    - by meilas
    I want to use the Dropbox API via this library: http://code.google.com/p/dropbox-php/. I installed MAMP, then tried "sudo pecl install oauth", but I got:

    downloading oauth-1.0.0.tgz ...
    Starting to download oauth-1.0.0.tgz (42,834 bytes)
    ............done: 42,834 bytes
    6 source files, building
    running: phpize
    Configuring for:
    PHP Api Version: 20090626
    Zend Module Api No: 20090626
    Zend Extension Api No: 220090626
    building in /var/tmp/pear-build-root/oauth-1.0.0
    running: /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/configure
    checking for grep that handles long lines and -e... /usr/bin/grep
    checking for egrep... /usr/bin/grep -E
    checking for a sed that does not truncate output... /opt/local/bin/gsed
    checking for cc... cc
    checking for C compiler default output file name... a.out
    checking whether the C compiler works... yes
    checking whether we are cross compiling... no
    checking for suffix of executables...
    checking for suffix of object files... o
    checking whether we are using the GNU C compiler... yes
    checking whether cc accepts -g... yes
    checking for cc option to accept ISO C89... none needed
    checking how to run the C preprocessor... cc -E
    checking for icc... no
    checking for suncc... no
    checking whether cc understands -c and -o together... yes
    checking for system library directory... lib
    checking if compiler supports -R... no
    checking if compiler supports -Wl,-rpath,... yes
    checking build system type... i686-apple-darwin10.4.0
    checking host system type... i686-apple-darwin10.4.0
    checking target system type... i686-apple-darwin10.4.0
    checking for PHP prefix... /usr
    checking for PHP includes... -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib
    checking for PHP extension directory... /usr/lib/php/extensions/no-debug-non-zts-20090626
    checking for PHP installed headers prefix... /usr/include/php
    checking if debug is enabled... no
    checking if zts is enabled... no
    checking for re2c... no
    configure: WARNING: You will need re2c 0.13.4 or later if you want to regenerate PHP parsers.
    checking for gawk... gawk
    checking for oauth support... yes, shared
    checking for cURL in default path... found in /usr
    checking for ld used by cc... /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld
    checking if the linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) is GNU ld... no
    checking for /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld option to reload object files... -r
    checking for BSD-compatible nm... /usr/bin/nm
    checking whether ln -s works... yes
    checking how to recognise dependent libraries... pass_all
    checking for ANSI C header files... yes
    checking for sys/types.h... yes
    checking for sys/stat.h... yes
    checking for stdlib.h... yes
    checking for string.h... yes
    checking for memory.h... yes
    checking for strings.h... yes
    checking for inttypes.h... yes
    checking for stdint.h... yes
    checking for unistd.h... yes
    checking dlfcn.h usability... yes
    checking dlfcn.h presence... yes
    checking for dlfcn.h... yes
    checking the maximum length of command line arguments... 196608
    checking command to parse /usr/bin/nm output from cc object... rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory ok
    checking for objdir... .libs
    checking for ar... ar
    checking for ranlib... ranlib
    checking for strip... strip
    rm: conftest.dSYM: is a directory
    rm: conftest.dSYM: is a directory
    checking if cc static flag works... rm: conftest.dSYM: is a directory yes
    checking if cc supports -fno-rtti -fno-exceptions... rm: conftest.dSYM: is a directory no
    checking for cc option to produce PIC... -fno-common
    checking if cc PIC flag -fno-common works... rm: conftest.dSYM: is a directory yes
    checking if cc supports -c -o file.o... rm: conftest.dSYM: is a directory yes
    checking whether the cc linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) supports shared libraries... yes
    checking dynamic linker characteristics... darwin10.4.0 dyld
    checking how to hardcode library paths into programs... immediate
    checking whether stripping libraries is possible... yes
    checking if libtool supports shared libraries... yes
    checking whether to build shared libraries... yes
    checking whether to build static libraries... no
    creating libtool
    appending configuration tag "CXX" to libtool
    configure: creating ./config.status
    config.status: creating config.h
    running: make
    /bin/sh /private/var/tmp/pear-build-root/oauth-1.0.0/libtool --mode=compile cc -I. -I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth -DPHP_ATOM_INC -I/private/var/tmp/pear-build-root/oauth-1.0.0/include -I/private/var/tmp/pear-build-root/oauth-1.0.0/main -I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -Wall -g -c /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c -o oauth.lo
    mkdir .libs
    cc -I. "-I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth" -DPHP_ATOM_INC -I/private/var/tmp/pear-build-root/oauth-1.0.0/include -I/private/var/tmp/pear-build-root/oauth-1.0.0/main "-I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth" -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -Wall -g -c "/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c" -fno-common -DPIC -o .libs/oauth.o
    In file included from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/php_oauth.h:47,
                     from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c:14:
    /usr/include/php/ext/pcre/php_pcre.h:29:18: error: pcre.h: No such file or directory
    In file included from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/php_oauth.h:47,
                     from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c:14:
    /usr/include/php/ext/pcre/php_pcre.h:37: error: expected '=', ',', ';', 'asm' or 'attribute' before '*' token
    /usr/include/php/ext/pcre/php_pcre.h:38: error: expected '=', ',', ';', 'asm' or 'attribute' before '*' token
    /usr/include/php/ext/pcre/php_pcre.h:44: error: expected specifier-qualifier-list before 'pcre'
    make: *** [oauth.lo] Error 1
    ERROR: `make' failed

    Read the article

  • Mac OS X Server 10.6 - is Apple's software mirrored RAID worth it?

    - by Arko
    Hi, I am installing an Intel Xserve (quad-core Xeon) with Snow Leopard Server (10.6) on two 80 GB 7200 rpm SATA HDs. I created a mirrored RAID set from those two drives using Disk Utility, and all went fine. I then asked myself whether this is really a good idea. I know that a hardware RAID system would be better, but what about this software RAID? Do you have any feedback on this? Will it keep working if one HD breaks down? Does it affect performance? [UPDATE] In short: hardware RAID is better than software RAID, which is better than none. Thank you all for the answers; they were very helpful, especially Gordon's script for monitoring failures, since Apple's software RAID is pretty silent about a drive failure (a sketch of that kind of check follows below).
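
    As a rough illustration of that kind of check (a minimal sketch under stated assumptions, not Gordon's actual script): it assumes diskutil appleRAID list prints a Status line for each RAID set, as it does on 10.6, and the failure keywords are guesses.

    #!/bin/sh
    # Poll Apple's software RAID and log when the mirror is no longer healthy.
    status=$(diskutil appleRAID list | grep -i 'Status')
    if echo "$status" | grep -qiE 'Degraded|Failed|Offline'; then
        logger -t raidwatch "Apple software RAID reports a problem: $status"
    fi

    Run from cron or a launchd job every few minutes, this turns an otherwise silent degradation into a syslog entry you can alert on.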

    Read the article

  • How do I install PHP with JSON and OAuth on Mac Snow Leopard?

    - by meilas
    I want to use the Dropbox API via this library, http://code.google.com/p/dropbox-php/. I installed MAMP, and then I tried sudo pecl install oauth but I got the following:

    downloading oauth-1.0.0.tgz ...
    Starting to download oauth-1.0.0.tgz (42,834 bytes)
    ............done: 42,834 bytes
    6 source files, building
    running: phpize
    Configuring for:
    PHP Api Version:         20090626
    Zend Module Api No:      20090626
    Zend Extension Api No:   220090626
    building in /var/tmp/pear-build-root/oauth-1.0.0
    running: /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/configure
    checking for grep that handles long lines and -e... /usr/bin/grep
    checking for egrep... /usr/bin/grep -E
    checking for a sed that does not truncate output... /opt/local/bin/gsed
    checking for cc... cc
    checking for C compiler default output file name... a.out
    checking whether the C compiler works... yes
    checking whether we are cross compiling... no
    checking for suffix of executables...
    checking for suffix of object files... o
    checking whether we are using the GNU C compiler... yes
    checking whether cc accepts -g... yes
    checking for cc option to accept ISO C89... none needed
    checking how to run the C preprocessor... cc -E
    checking for icc... no
    checking for suncc... no
    checking whether cc understands -c and -o together... yes
    checking for system library directory... lib
    checking if compiler supports -R... no
    checking if compiler supports -Wl,-rpath,... yes
    checking build system type... i686-apple-darwin10.4.0
    checking host system type... i686-apple-darwin10.4.0
    checking target system type... i686-apple-darwin10.4.0
    checking for PHP prefix... /usr
    checking for PHP includes... -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib
    checking for PHP extension directory... /usr/lib/php/extensions/no-debug-non-zts-20090626
    checking for PHP installed headers prefix... /usr/include/php
    checking if debug is enabled... no
    checking if zts is enabled... no
    checking for re2c... no
    configure: WARNING: You will need re2c 0.13.4 or later if you want to regenerate PHP parsers.
    checking for gawk... gawk
    checking for oauth support... yes, shared
    checking for cURL in default path... found in /usr
    checking for ld used by cc... /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld
    checking if the linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) is GNU ld... no
    checking for /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld option to reload object files... -r
    checking for BSD-compatible nm... /usr/bin/nm
    checking whether ln -s works... yes
    checking how to recognise dependent libraries... pass_all
    checking for ANSI C header files... yes
    checking for sys/types.h... yes
    checking for sys/stat.h... yes
    checking for stdlib.h... yes
    checking for string.h... yes
    checking for memory.h... yes
    checking for strings.h... yes
    checking for inttypes.h... yes
    checking for stdint.h... yes
    checking for unistd.h... yes
    checking dlfcn.h usability... yes
    checking dlfcn.h presence... yes
    checking for dlfcn.h... yes
    checking the maximum length of command line arguments... 196608
    checking command to parse /usr/bin/nm output from cc object...
    rm: conftest.dSYM: is a directory
    rm: conftest.dSYM: is a directory
    rm: conftest.dSYM: is a directory
    rm: conftest.dSYM: is a directory
    ok
    checking for objdir... .libs
    checking for ar... ar
    checking for ranlib... ranlib
    checking for strip... strip
    rm: conftest.dSYM: is a directory
    rm: conftest.dSYM: is a directory
    checking if cc static flag works...
    rm: conftest.dSYM: is a directory
    yes
    checking if cc supports -fno-rtti -fno-exceptions...
    rm: conftest.dSYM: is a directory
    no
    checking for cc option to produce PIC... -fno-common
    checking if cc PIC flag -fno-common works...
    rm: conftest.dSYM: is a directory
    yes
    checking if cc supports -c -o file.o...
    rm: conftest.dSYM: is a directory
    yes
    checking whether the cc linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) supports shared libraries... yes
    checking dynamic linker characteristics... darwin10.4.0 dyld
    checking how to hardcode library paths into programs... immediate
    checking whether stripping libraries is possible... yes
    checking if libtool supports shared libraries... yes
    checking whether to build shared libraries... yes
    checking whether to build static libraries... no
    creating libtool
    appending configuration tag "CXX" to libtool
    configure: creating ./config.status
    config.status: creating config.h
    running: make
    /bin/sh /private/var/tmp/pear-build-root/oauth-1.0.0/libtool --mode=compile cc -I. -I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth -DPHP_ATOM_INC -I/private/var/tmp/pear-build-root/oauth-1.0.0/include -I/private/var/tmp/pear-build-root/oauth-1.0.0/main -I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -Wall -g -c /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c -o oauth.lo
    mkdir .libs
    cc -I. "-I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth" -DPHP_ATOM_INC -I/private/var/tmp/pear-build-root/oauth-1.0.0/include -I/private/var/tmp/pear-build-root/oauth-1.0.0/main "-I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth" -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -Wall -g -c "/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c" -fno-common -DPIC -o .libs/oauth.o
    In file included from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/php_oauth.h:47,
                     from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c:14:
    /usr/include/php/ext/pcre/php_pcre.h:29:18: error: pcre.h: No such file or directory
    In file included from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/php_oauth.h:47,
                     from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c:14:
    /usr/include/php/ext/pcre/php_pcre.h:37: error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token
    /usr/include/php/ext/pcre/php_pcre.h:38: error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token
    /usr/include/php/ext/pcre/php_pcre.h:44: error: expected specifier-qualifier-list before 'pcre'
    make: *** [oauth.lo] Error 1
    ERROR: `make' failed
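
    The decisive lines are at the end: Apple ships PHP's development headers but not the pcre.h that php_pcre.h includes, so anything built against /usr/include/php fails exactly like this. A minimal sketch of one possible workaround, assuming MacPorts is installed (the /opt/local/bin/gsed picked up by configure suggests it is) and that its pcre port supplies the missing header; the paths are assumptions, not verified here:

    $ sudo port install pcre
    $ # let the compiler find /opt/local/include/pcre.h during the pecl build
    $ sudo env C_INCLUDE_PATH=/opt/local/include pecl install oauth

    Another commonly reported fix is to copy the header next to the PHP headers that want it, e.g. sudo cp /opt/local/include/pcre.h /usr/include/php/ext/pcre/. As for JSON, the PHP 5.3 bundled with Snow Leopard already includes ext/json, so only the OAuth extension needs building.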

    Read the article

  • How can Bonjour be set up to work over a VPN connection using Mac OS X Mountain Lion Server?

    - by Ben Coppock
    I purchased Mountain Lion Server for our office thinking that Bonjour would automatically let any computer connected via VPN see all the machines and applications (such as Bento) running on the office network. The hope was that those of us working at home would feel just as if we were in the office, with all network services working transparently over the VPN connection. However, I see that Bonjour (aka mDNS) does not work over the VPN by default. Can I configure Mountain Lion Server to pass Bonjour traffic over the VPN automatically? Is there any reason not to do this?
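
    The underlying issue is that Bonjour relies on link-local multicast (224.0.0.251 / ff02::fb), which a routed VPN tunnel does not forward, so there is no Server.app switch that simply turns it on. One workaround is to advertise selected office services onto the remote link yourself using dns-sd's proxy mode; in the sketch below the service name, type, port, hostname, and address are all hypothetical:

    $ # run on a machine on the home LAN: advertise an office service on its behalf
    $ dns-sd -P "Office Bento" _bento._tcp local 49153 office-server.private 10.0.1.5

    A more scalable route is Wide-Area Bonjour, i.e. publishing the services in a unicast DNS-SD domain served by the office DNS server, but that takes DNS zone configuration rather than a checkbox.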

    Read the article

  • How can I telnet to an IPv6 host using Mac OS X?

    - by Nate
    I’m testing IPv6 on a corporate network and having problems with OS X. With most IPv6 commands, such as telnet -6 or traceroute6, I get the error:
    connect: No route to host
    For example, I have a web server. This fails:
    $ telnet -6 fe80::… 80          # this fails
    I know the server is reachable because ping6 works (note that I have to use the -I argument):
    $ ping6 -I en1 fe80::…          # this works
    And I know the web server is running because I can telnet to it from Windows:
    C:\> telnet fe80::… 80          # this works
    I suspect there is some configuration flag or command-line argument that I am missing.
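
    Those fe80:: addresses are link-local, and on OS X's BSD-derived stack a link-local destination is ambiguous until you name the interface: ping6 gets the zone from -I en1, but telnet and traceroute6 have no such flag, so the kernel cannot pick a route. Appending an RFC 4007 zone suffix such as %en1 to the address supplies the scope directly; a sketch, keeping the question's elided address:

    $ telnet -6 fe80::…%en1 80      # the %en1 suffix names the interface scope
    $ traceroute6 fe80::…%en1

    Windows evidently resolves the scope differently (it uses numeric %ifindex zones and may pick a default), which would explain why the bare address worked there.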

    Read the article
