Search Results

Search found 15209 results on 609 pages for 'configuration'.


  • Infinite detail inside Perlin noise procedural mapping

    - by Dave Jellison
    I am very new to game development but I was able to scour the internet to figure out Perlin noise enough to implement a very simple 2D tile infinite procedural world. Here's the question and it's more conceptual than code-based in answer, I think. I understand the concept of "I plug in (x, y) and get back from Perlin noise p" (I'll call it p). P will always be the same value for the same (x, y) (as long as the Perlin algorithm parameters haven't changed, like altering number of octaves, et cetera). What I want to do is be able to zoom into a square and be able to generate smaller squares inside of the already generated overhead tile of terrain. Let's say I have a jungle tile for overhead terrain but I want to zoom in and maybe see a small river tile that would only be a creek and not large enough to be a full "big tile" of water in the overhead. Of course, I want the same net effect as a Perlin equation inside a Perlin equation if that makes sense? (aka. I want two people playing the game with the same settings to get the same terrain and details every time). I can conceptually wrap my head around the large tile being based on an "zoomed out" coordinate leaving enough room to drill into but this approach doesn't make sense in my head (maybe I'm wrong). I'm guessing with this approach my overhead terrain would lose all of the cohesiveness delivered by the Perlin. Imagine I calculate (0, 0) as overhead tile 1 and then to the east of that I plug in (50, 0). OK, great, I now have 49 pixels of detail I could then "drill down" into. The issue I have in my head with this approach (without attempting it) is that there's no guarantee from my Perlin noise that (0,0) would be a good neighbor to (50,0) as they could have wildly different "elevations" or p/resultant values returning from the Perlin equation when I generate the overhead map. I think I can conceive of using the Perlin noise for the overhead tile to then reuse the p value as a seed for the "detail" level of noise once I zoom in. That would ensure my detail Perlin is always the same configuration for (0,0), (1,0), etc. ad nauseam but I'm not sure if there are better approaches out there or if this is a sound approach at all.
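
    A minimal sketch of the seeding idea described above, not from the post itself: the coarse overhead value p for a tile is folded into the seed of a second noise layer, so the detail inside any tile is fully reproducible from the world seed and the tile coordinates. The Noise method here is just a hash-based stand-in for whatever Perlin implementation is already in use; all names are illustrative.

        using System;

        public class LayeredTerrain
        {
            private readonly int worldSeed;

            public LayeredTerrain(int worldSeed) { this.worldSeed = worldSeed; }

            // Stand-in for a real Perlin sample: a seeded hash mapped to [0, 1).
            private static double Noise(int seed, double x, double y)
            {
                unchecked
                {
                    int h = seed;
                    h = h * 31 + x.GetHashCode();
                    h = h * 31 + y.GetHashCode();
                    h ^= h >> 13;
                    return (h & 0x7fffffff) / (double)int.MaxValue;
                }
            }

            // Coarse value p for the big tile at (tileX, tileY).
            public double OverheadValue(int tileX, int tileY)
            {
                return Noise(worldSeed, tileX, tileY);
            }

            // Detail value inside a tile: p seeds the detail layer, so zooming in
            // is deterministic without storing anything per tile.
            public double DetailValue(int tileX, int tileY, double localX, double localY)
            {
                double p = OverheadValue(tileX, tileY);
                int detailSeed = unchecked(worldSeed ^ (int)(p * int.MaxValue));
                return Noise(detailSeed, localX, localY);
            }
        }

    An alternative that keeps neighbouring tiles cohesive is to keep one continuous noise field and sample it at fractional coordinates, e.g. Noise(worldSeed, tileX + localX / tileSize, tileY + localY / tileSize); that is zooming in rather than reseeding, and it avoids hard seams at tile borders.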

    Read the article

  • CodePlex Daily Summary for Tuesday, November 22, 2011

    CodePlex Daily Summary for Tuesday, November 22, 2011Popular ReleasesDeveloper Team Article System Management: DTASM v1.3: ?? ??? ???? 3 ????? ???? ???? ????? ??? : - ????? ?????? ????? ???? ?? ??? ???? ????? ?? ??? ? ?? ???? ?????? ???? ?? ???? ????? ?? . - ??? ?? ???? ????? ???? ????? ???? ???? ?? ????? , ?????? ????? ????? ?? ??? . - ??? ??????? ??? ??? ???? ?? ????? ????? ????? .VideoLan DotNet for WinForm, WPF & Silverlight 5: VideoLan DotNet for WinForm, WPF, SL5 - 2011.11.22: The new version contains Silverlight 5 library: Vlc.DotNet.Silverlight. A sample could be tested here The new version add and correct many features : Correction : Reinitialize some variables Deprecate : Logging API, since VLC 1.2 (08/20/2011) Add subitem in LocationMedia (for Youtube videos, ...) Update Wpf sample to use Youtube videos Many others correctionsSharePoint 2010 FBA Pack: SharePoint 2010 FBA Pack 1.2.0: Web parts are now fully customizable via html templates (Issue #323) FBA Pack is now completely localizable using resource files. Thank you David Chen for submitting the code as well as Chinese translations of the FBA Pack! The membership request web part now gives the option of having the user enter the password and removing the captcha (Issue # 447) The FBA Pack will now work in a zone that does not have FBA enabled (Another zone must have FBA enabled, and the zone must contain the me...SharePoint 2010 Education Demo Project: Release SharePoint SP1 for Education Solutions: This release includes updates to the Content Packs for SharePoint SP1. All Content Packs have been updated to install successfully under SharePoint SP1SQL Monitor - tracking sql server activities: SQLMon 4.1 alpha 6: 1. improved support for schema 2. added find reference when right click on object list 3. added object rename supportBugNET Issue Tracker: BugNET 0.9.126: First stable release of version 0.9. Upgrades from 0.8 are fully supported and upgrades to future releases will also be supported. This release is now compiled against the .NET 4.0 framework and is a requirement. Because of this the web.config has significantly changed. After upgrading, you will need to configure the authentication settings for user registration and anonymous access again. Please see our installation / upgrade instructions for more details: http://wiki.bugnetproject.c...Anno 2070 Assistant: v0.1.0 (STABLE): Version 0.1.0 Features Production Chains Eco Production Chains (Complete) Tycoon Production Chains (Disabled - Incomplete) Tech Production Chains (Disabled - Incomplete) Supply (Disabled - Incomplete) Calculator (Disabled - Incomplete) Building Layouts Eco Building Layouts (Complete) Tycoon Building Layouts (Disabled - Incomplete) Tech Building Layouts (Disabled - Incomplete) Credits (Complete)Free SharePoint 2010 Sites Templates: SharePoint Server 2010 Sites Templates: here is the list of sites templates to be downloadedVsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 30 Beta: Note: This release does not work with custom VsTortoise toolbars. These get removed every time when you shutdown Visual Studio. (#7940) Build 30 (beta)New: Support for TortoiseSVN 1.7 added. (the download contains both setups, for TortoiseSVN 1.6 and 1.7) New: OpenModifiedDocumentDialog displays conflicted files now. New: OpenModifiedDocument allows to group items by changelist now. Fix: OpenModifiedDocumentDialog caused Visual Studio 2010 to freeze sometimes. Fix: The installer didn...nopCommerce. 
Open source shopping cart (ASP.NET MVC): nopcommerce 2.30: Highlight features & improvements: • Performance optimization. • Back in stock notifications. • Product special price support. • Catalog mode (based on customer role) To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).WPF Converters: WPF Converters V1.2.0.0: support for enumerations, value types, and reference types in the expression converter's equality operators the expression converter now handles DependencyProperty.UnsetValue as argument values correctly (#4062) StyleCop conformance (more or less)Json.NET: Json.NET 4.0 Release 4: Change - JsonTextReader.Culture is now CultureInfo.InvariantCulture by default Change - KeyValurPairConverter no longer cares about the order of the key and value properties Change - Time zone conversions now use new TimeZoneInfo instead of TimeZone Fix - Fixed boolean values sometimes being capitalized when converting to XML Fix - Fixed error when deserializing ConcurrentDictionary Fix - Fixed serializing some Uris returning the incorrect value Fix - Fixed occasional error when...Media Companion: MC 3.423b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Replaced 'Rebuild' with 'Refresh' throughout entire code. Rebuild will now be known as Refresh. mc_com.exe has been fully updated TV Show Resolutions... Resolved issue #206 - having to hit save twice when updating runtime manually Shrunk cache size and lowered loading times f...Delta Engine: Delta Engine Beta Preview v0.9.1: v0.9.1 beta release with lots of refactoring, fixes, new samples and support for iOS, Android and WP7 (you need a Marketplace account however). If you want a binary release for the games (like v0.9.0), just say so in the Forum or here and we will quickly prepare one. It is just not much different from v0.9.0, so I left it out this time. See http://DeltaEngine.net/Wiki.Roadmap for details.SharpMap - Geospatial Application Framework for the CLR: SharpMap-0.9-AnyCPU-Trunk-2011.11.17: This is a build of SharpMap from the 0.9 development trunk as per 2011-11-17 For most applications the AnyCPU release is the recommended, but in case you need an x86 build that is included to. For some dataproviders (GDAL/OGR, SqLite, PostGis) you need to also referense the SharpMap.Extensions assembly For SqlServer Spatial you need to reference the SharpMap.SqlServerSpatial assemblyAJAX Control Toolkit: November 2011 Release: AJAX Control Toolkit Release Notes - November 2011 Release Version 51116November 2011 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4 - Binary – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 - Binary – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: - The current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0. The latest version that is compatible with ASP.NET 2.0 can be found h...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.36: Fix for issue #16908: string literals containing ASP.NET replacement syntax fail if the ASP.NET code contains the same character as the string literal delimiter. 
Also, we shouldn't be changing the delimiter for those literals or combining them with other literals; the developer may have specifically chosen the delimiter used because of possible content inserted by ASP.NET code. This logic is normally off; turn it on via the -aspnet command-line flag (or the Code.Settings.AllowEmbeddedAspNetBl...MVC Controls Toolkit: Mvc Controls Toolkit 1.5.5: Added: Now the DateRanteAttribute accepts complex expressions containing "Now" and "Today" as static minimum and maximum. Menu, MenuFor helpers capable of handling a "currently selected element". The developer can choose between using a standard nested menu based on a standard SimpleMenuItem class or specifying an item template based on a custom class. Added also helpers to build the tree structure containing all data items the menu takes infos from. Improved the pager. Now the developer ...SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.7: Reworked API to be more consistent. See Supported formats table. Added some more helper methods - e.g. OpenEntryStream (RarArchive/RarReader does not support this) Fixed up testsSilverlight Toolkit: Windows Phone Toolkit - Nov 2011 (7.1 SDK): This release is coming soon! What's new ListPicker once again works in a ScrollViewer LongListSelector bug fixes around OutOfRange exceptions, wrong ordering of items, grouping issues, and scrolling events. ItemTuple is now refactored to be the public type LongListSelectorItem to provide users better access to the values in selection changed handlers. PerformanceProgressBar binding fix for IsIndeterminate (item 9767 and others) There is no longer a GestureListener dependency with the C...New ProjectsAndrecorder: Andrecorder???Android???????,???????????????????,????????????????,????????!Android Tree Bulletin: Android bulletin reader in tree format.Bài t?p l?p môn HCI: Name: Ph?n m?m qu?n lý thu h?c phí tru?ng d?i h?c Công Nghi?p Hà N?i Basic Grid Collision sample in XNA: This project shows how to implement a basic grid collision in XNA. The project uses the XNA 4.0 framework and C#Club Manager: Club Manager is a web site for managing sport clubs / teams.Create email with encrypt text implement TEA encryption and Web Service: RahaTEA Mail is an application to send messages in secret. These applications implement TEA encryption and web serviceCRM 2011 Layers: Several .net layers to customize CRM 2011CTEF: China Tomorrow Education Foundation websitedns?????: ??c#???dns?????。????????,???????,??????。EAF: Extensibility Application FrameworkEnergy SBA: In order to compete with large companies for Federal contracts, small business need information. This application seeks to show standard methods of using remote APIs to integrate information into a Metro interface using services provided by the Small Business Administration (SBA)EPiOptimiser - Scan your EPiServer configuration to optimise start up times: EPiScanner scans your EPiServer configuration to optimise start ups by generating a recommended exclude list of assemblies to include in EPiServer framework config. It can be used on command line, as a custom build task or integrated into Visual Studio as an external tool.FreeIDS - Free Intrusion Detection System: Don't want someone to use your computer? Don't want to use a system password? Want to see when someone accessed your computer? Time/Date? 
FreeIDS is it!FtpServerAdministrator: FtpServerAdministrator makes it easier to administer some ftp server by code, although it can only be used for FileZilla server now. It's developed in C#.GreenPoint Online: Tools and components that help you customize an Office 365 / SharePoint Online Environment.HCC C# Workshop: This project contains the code for the exercises of the HCC C# WorkshopKsigDo - Real time view model syncing across user screens: KsigDo show real time view model syncing across user screens - using ASP.NET, Knockout and SignalR. Real time data syncing across user views *was* hard, especially in web applications. Most of the time, the second user needs to refresh the screen, to see the changes made by first user, or we need to implement some long polling that fetches the data and does the update manually. Now, with SignalR and Knockout, ASP.NET developers can take advantage of view model syncing across users, that...lineseven: ???????????????。Mail Size Labeler for GMail: A small utility that labels large e-mails on your gmail account. This utility scan you gmail account, and adds labels to large e-mail so you can clean your mailbox and free space. The labels this utility adds are: Size 1M-2M Size 2M-5M Size 5M-10M Size 10M-15M Size 15M plus Note: a single e-mail thread may get multiple labels if different e-mails of the thread fit different filters.MathService: Complex digits, standart class extentions etc.MyGameProject: gamesMySQL Connect 2 ASP.NET: Example project to show how to connect MySQL database to ASP.NET web project. IDE: Visual Studio 2010 Pro Programming language: C# Detailed information in the article here: http://epavlov.net/blog/2011/11/13/connect-to-mysql-in-visual-studio/ nl: Nutri Leaf Devomr.event.js: Simple js event injecterPastebin4DotNet: This project is an example of how to consume an API, in this case I consummed the Pastebin API.Pomelo: Pomelo is a website example.QuickDevFrameWork: ????????,??,??,????,ioc ?????postsharp?aopReadable Passphrase Generator: Generates passphrases which are (mostly) grammatically correct but nonsensical. These are easy to remember but difficult to guess (for humans or computers). Developed in C# with a KeePass plugin, console app and public API.Rosyama.ru for Windows Phone 7: ?????????? Windows Phone 7 ??? ???????? ???????? ?? ???? rosyama.ru. ?????????? ??????? ?????????? ? ???????? ????????? ???????. SimpleBatch: As the name suggests, this is a simple batch framework allowing you to define batch jobs in XML format. Thus far, contains a basic selection of processors such as the following; File Email SQL (SQL Server Client) SharePoint Document Library Custom ProcessorSite de Notícias: Projeto de faculdade que consiste na criação de um site de notícias.SPWikiProvisioning: Create update and delete SharePoint wiki pages using feature activation and deactivation handlers.SVN Automated Control With C#: I Created this libaray because I need to control Tortoise SVN automactically with out an interface for my own build server and could not find any resuilts on google to achive this task so I went about creating this libaray which dos most of the task's that I needed. I round that you could control SVN by command line so using that as my basic idear I went about coding the most common commands for SVN most of the commads are done but not all. 
if you like this libaray then please use it we...TremplinCMS: TremplinCMS is a CMS framework for ASP .NET 4.vlu0206sms: SMSMaker by team0206 developingWCF DataService RequestStream Access on webInvoke HTTP POST: This library provides access to the message body request stream of a WCF Data Service (formerly ADO.NET Data Service), which is not possible with the original WCF Data Service class. You are enabled passing data (e.g. Json, files) via HTTP POST to the request body. It uses the operation context (DbContext) provided by the DataService<T> class to get access to the resquest stream.WebOS: Welcome to join us to build our os projectWp7StarterDantas: Iniciando com Wp7WpfCollaborative3D: WpfCollaborative3DXNA Content Preprocessor: The XNA Content Preprocessor allows you to compile all of your XNA assets outside of your normal XNA project. This means more time building your game or app instead of your content.

    Read the article

  • debootstrap or virt-install Ubuntu Server Maverick fails

    - by poelinca
    OK, so running any variation of debootstrap I get the following error: I: Extracting zlib1g... W: Failure trying to run: chroot /lxc/iso/dodo mount -t proc proc /proc debootstrap.log : mount: permission denied If I manually chroot into the directory, I get prompted with: id: cannot find name for group ID 0 I have no name!@...# I tried addgroup but it's not installed, and apt-get/aptitude: command not found, so I can't do anything with it. I've tried ubuntu-vm-builder, but since it calls debootstrap I get the same error. I played with it for a few days, then stopped and gave virt-install a try; everything works until I get to the console to finish the install, which shows only: Escape character is ^] and nothing more, no matter what I type. So basically what I'm trying to do is build a usable chroot system that I can use with lxc or libvirt. What are my options to get containers/virtualisation up and running? I've read somewhere that I can use OpenVZ templates with lxc or libvirt, but how? Let me know if you need additional info (p.s. I'm doing all this on a dedicated server, so I can't access it by hand, only over ssh; on my local PC running Ubuntu Desktop Maverick everything works). EDIT: Getting closer. I managed to understand how to use an OpenVZ template with lxc; now the problem comes with the network bridge: lxc-start: invalid interface name: br0 # Use same bridge device used in your controlling host setup lxc-start: failed to process 'lxc.network.link = br0 # Use same bridge device used in your controlling host setup ' lxc-start: failed to read configuration file I followed the exact steps to create a bridge, and the lxc conf looks like: lxc.network.type = veth lxc.network.flags = up lxc.network.link = br0 # Use same bridge device used in your controlling host setup lxc.network.hwaddr = {a1:b2:c3:d4:e5:f6} # As appropiate (line only needed if you wish to dhcp later) lxc.network.ipv4 = {10.0.0.100} # (Use 0.0.0.0 if you wish to dhcp later) lxc.network.name = eth0 # could likely be whatever you want Since it's not working I know something is wrong, so could somebody guide me? EDIT: It looks like the base install was using a custom kernel (bzImage-2.6.34.6-xxxx-grs-ipv6-65) for which I didn't find the headers. I did an update-grub after I installed a new kernel, edited menu.lst, and now it's using 2.6.35-23-server, and debootstrap is working just fine, same as ubuntu-vm-builder.
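
    Judging from the error text quoted above, lxc-start is reading the trailing '# ...' comments (and probably the {braces} copied from the guide) as part of each value, so the interface name it sees literally ends with the comment. A hedged guess at the fix is to keep only the bare values on each line, for example:

        lxc.network.type = veth
        lxc.network.flags = up
        lxc.network.link = br0
        lxc.network.hwaddr = a1:b2:c3:d4:e5:f6
        lxc.network.ipv4 = 10.0.0.100
        lxc.network.name = eth0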

    Read the article

  • No HDMI audio in 13.04

    - by King84
    I have just upgraded from 12.10 to 13.04 and now everything works perfectly, except that I have no audio via HDMI. I am using a Samsung TV monitor connected via HDMI to my video card, an Asus EAH4670/DI/1GD3 (which has a Radeon HD 4670 GPU on it), installed physically into my motherboard, which is an MSI 770-C45. I am running kernel 3.9, and I just have no sound. I tried downloading and installing https://code.launchpad.net/~ubuntu-audio-dev/+archive/alsa-daily/+files/oem-audio-hda-daily-dkms_0.201304261252~raring1_all.deb, but without any good result. Please help, I need my audio back. Finally, here is my lspci output. ale@beast:~$ lspci 00:00.0 Host bridge: Advanced Micro Devices [AMD] nee ATI RX780/RX790 Host Bridge 00:02.0 PCI bridge: Advanced Micro Devices [AMD] nee ATI RD790 PCI to PCI bridge (external gfx0 port A) 00:06.0 PCI bridge: Advanced Micro Devices [AMD] nee ATI RD790 PCI to PCI bridge (PCI express gpp port C) 00:11.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 SATA Controller [IDE mode] 00:12.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller 00:12.1 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0 USB OHCI1 Controller 00:12.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller 00:13.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller 00:13.1 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0 USB OHCI1 Controller 00:13.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller 00:14.0 SMBus: Advanced Micro Devices [AMD] nee ATI SBx00 SMBus Controller (rev 3c) 00:14.1 IDE interface: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 IDE Controller 00:14.2 Audio device: Advanced Micro Devices [AMD] nee ATI SBx00 Azalia (Intel HDA) 00:14.3 ISA bridge: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 LPC host controller 00:14.4 PCI bridge: Advanced Micro Devices [AMD] nee ATI SBx00 PCI to PCI Bridge 00:14.5 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI2 Controller 00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor HyperTransport Configuration 00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor Miscellaneous Control 00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor Link Control 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV730 XT [Radeon HD 4670] 01:00.1 Audio device: Advanced Micro Devices [AMD] nee ATI RV710/730 HDMI Audio [Radeon HD 4000 series] 02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168 PCI Express Gigabit Ethernet controller (rev 03) ale@beast:~$
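
    Not from the post, but two generic checks for Radeon HDMI audio on this hardware; the boot parameter is a widely reported workaround, not a guaranteed fix, so treat it as an assumption to verify:

        # 1. Confirm ALSA exposes the RV710/730 HDMI device at all
        aplay -l

        # 2. If it is listed but silent, try enabling the radeon driver's HDMI audio,
        #    which has been disabled by default for some cards: add radeon.audio=1 to
        #    GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, for example
        #    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash radeon.audio=1"
        sudoedit /etc/default/grub
        sudo update-grub
        # reboot, then test the HDMI device reported by aplay -l (card/device numbers vary):
        # speaker-test -c 2 -D hw:1,3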

    Read the article

  • Configuring Multi-Tap on Synaptics Touchpad

    - by nunos
    I am having a hard time configuring my notebook's touchpad. The touchpad already works. It successfully responds to one-finger tap, two-finger tap and two-finger vertical scrolling. What I want to accomplish: change the two-finger tap action from right-mouse click to middle-mouse click, and add three-finger tap functionality to yield a right-mouse click action. I read on a forum to use this as a guide. I have successfully accomplished point 1 with synclient TapButton2=2. However, I have to do it every time I log in. I have tried to put that command on /etc/rc.local but the computer always boots and logs in with the default configuration. Regarding point 2, I have tried synclient TapButton3=3 but it doesn't do anything when I three-finger tap the touchpad. I am running Ubuntu 11.10 on an Asus N82JV. /etc/X11/xorg.conf: nuno@mozart:~$ cat /etc/X11/xorg.conf Section "InputClass" Identifier "touchpad catchall" Driver "synaptics" MatchIsTouchpad "on" MatchDevicePath "/dev/input/event*" Option "TapButton1" "1" Option "TapButton2" "2" Option "TapButton3" "3" EndSection /usr/share/X11/xorg.conf.d/50-synaptics.conf: nuno@mozart:~$ cat /usr/share/X11/xorg.conf.d/50-synaptics.conf # Example xorg.conf.d snippet that assigns the touchpad driver # to all touchpads. See xorg.conf.d(5) for more information on # InputClass. # DO NOT EDIT THIS FILE, your distribution will likely overwrite # it when updating. Copy (and rename) this file into # /etc/X11/xorg.conf.d first. # Additional options may be added in the form of # Option "OptionName" "value" # Section "InputClass" Identifier "touchpad catchall" Driver "synaptics" MatchIsTouchpad "on" MatchDevicePath "/dev/input/event*" Option "TapButton1" "1" Option "TapButton2" "2" Option "TapButton3" "3" EndSection xinput list: nuno@mozart:~$ xinput list ? Virtual core pointer id=2 [master pointer (3)] ? ? Virtual core XTEST pointer id=4 [slave pointer (2)] ? ? Microsoft Microsoft® Nano Transceiver v2.0 id=12 [slave pointer (2)] ? ? Microsoft Microsoft® Nano Transceiver v2.0 id=13 [slave pointer (2)] ? ? ETPS/2 Elantech Touchpad id=16 [slave pointer (2)] ? Virtual core keyboard id=3 [master keyboard (2)] ? Virtual core XTEST keyboard id=5 [slave keyboard (3)] ? Power Button id=6 [slave keyboard (3)] ? Video Bus id=7 [slave keyboard (3)] ? Video Bus id=8 [slave keyboard (3)] ? Sleep Button id=9 [slave keyboard (3)] ? USB2.0 2.0M UVC WebCam id=10 [slave keyboard (3)] ? Microsoft Microsoft® Nano Transceiver v2.0 id=11 [slave keyboard (3)] ? Asus Laptop extra buttons id=14 [slave keyboard (3)] ? AT Translated Set 2 keyboard id=15 [slave keyboard (3)]
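
    One hedged way to make these settings stick across logins, assuming the desktop session is resetting the touchpad after /etc/rc.local runs: keep the Option lines in a local snippet under /etc/X11/xorg.conf.d (the directory the distribution file above tells you to copy into), or run synclient from Startup Applications after login. The file and option names below just mirror the post's own configuration. Also note the device list shows an Elantech touchpad, which may simply not report three-finger taps to the synaptics driver, in which case TapButton3 cannot work no matter where it is set.

        # /etc/X11/xorg.conf.d/51-local-synaptics.conf   (create the directory if needed)
        Section "InputClass"
            Identifier "local touchpad tap overrides"
            MatchIsTouchpad "on"
            MatchDevicePath "/dev/input/event*"
            Driver "synaptics"
            Option "TapButton2" "2"
            Option "TapButton3" "3"
        EndSection

        # Or, as a per-user fallback, add this command to Startup Applications:
        #   synclient TapButton2=2 TapButton3=3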

    Read the article

  • Solaris Tips : Assembler, Format, File Descriptors, Ciphers & Mount Points

    - by Giri Mandalika
    1. Most Oracle software installers need assembler Assembler (as) is not installed by default on Solaris 11.      Find and install eg., # pkg search assembler INDEX ACTION VALUE PACKAGE pkg.fmri set solaris/developer/assembler pkg:/developer/[email protected] # pkg install pkg:/developer/assembler Assembler binary used to be under /usr/ccs/bin directory on Solaris 10 and prior versions.      There is no /usr/ccs/bin on Solaris 11. Contents were moved to /usr/bin 2. Non-interactive retrieval of the entire list of disks that format reports If the format utility cannot show the entire list of disks in a single screen on stdout, it shows some and prompts user to - hit space for more or s to select - to move to the next screen to show few more disks. Run the following command(s) to retrieve the entire list of disks in a single shot. format 3. Finding system wide file descriptors/handles in use Run the following kstat command as any user (privileged or non-privileged). kstat -n file_cache -s buf_inuse Going through /proc (process filesystem) is less efficient and may lead to inaccurate results due to the inclusion of duplicate file handles. 4. ssh connection to a Solaris 11 host fails with error Couldn't agree a client-to-server cipher (available: aes128-ctr,aes192-ctr,aes256-ctr,arcfour128,arcfour256,arcfour) Solution: add 3des-cbc to the list of accepted ciphers to sshd configuration file. Steps: Append the following line to /etc/ssh/sshd_config Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour128,arcfour256,\ arcfour,3des-cbc Restart ssh daemon svcadm -v restart ssh 5. UFS: Finding the last mount point for a device fsck utility reports the last mountpoint on which the filesystem was mounted (it won't show the mount options though). The filesystem should be unmounted when running fsck. eg., # fsck -n /dev/dsk/c0t5000CCA0162F7BC0d0s6 ** /dev/rdsk/c0t5000CCA0162F7BC0d0s6 (NO WRITE) ** Last Mounted on /export/oracle ** Phase 1 - Check Blocks and Sizes ... ...

    Read the article

  • ArchBeat Facebook Friday: Top 10 Posts - August 8-14, 2014

    - by Bob Rhubart-Oracle
    5,307 people pay attention to the OTN ArchBeat Facebook Page. Here are the Top 10 posts from that page for the last seven days, August 8-14, 2014. Podcast: ODTUG Kscope 2014: Anatomy of a User Conference - Part 3 There is more to a great user conference than a shared interest in Oracle products. In the final segment of this 3-part OTN ArchBeat Podcast panelists Danny Bryant, Chet "ORACLENERD" Justice, Cameron Lackpour, Debra Lilley, and Mike Riley discuss the nature and importance of community Oracle SOA Suite 12c: The LDAP Adapter quick and easy | Maarten Smeets Maarten Smeets' how-to post describes the installation and configuration of an LDAP server and browser (ApacheDS and Apache Directory Studio). Process level Exception Handling in BPM12c | Abhishek Mittal When an exception occurs while running a process flow you have two choices: 1) retry running the flow object that caused that process flow or 2) move the process instance to the next flow object in the main process flow. Abhishek Mittal shows you how to do both. Building a Responsive WebCenter Portal Application | JayJay Zheng Oracle ACE JayJay Zheng's article addresses the essentials of responsive web design, shows you how to design and develop a responsive WebCenter Portal application, and reviews key development considerations. Cloud Control authorization with Active Directory | Jeroen Gouma Jeroen Gouma takes you step-by-step through the user authorization process in Oracle Enterprise Manager Cloud Control 12c. Video: CIOs Guide to Oracle Products and Solutions | Jessica Keyes The CIO's Guide to Oracle Products and Solutions author Jessica Keyes talks about why input from users and developers is essential to CIOs who want to avoid being escorted out of the building by security guards. Read A CIO's Guide to Oracle Cloud Computing, a sample chapter from the book. Twitter Tuesday - Top 10 @ArchBeat Tweets - August 5-11, 2014 @OTNArchBeat followers from across the galaxy have spoken! Here are the Top 10 tweets for the past seven days. Topics include: Hyperion, OBIEE, ODI, Oracle MAF, and SOA Suite. Recap: Fusion Middleware Summer Camps - Lisbon 2014 | Simon Haslam Oracle ACE Director Simon Haslam's recap of his experience at the Oracle Fusion Middleware Summer Camp in Lisbon, Portugal will make you wish you had been there. WebLogic Data Source Connection Labeling | Steve Felts The connection labeling feature was added in WLS release 10.3.6, and enhanced in release WLS 12.1.3. This post by Steve Felts describes two new connection properties that can be configured on the data source descriptor. Why Mobile Apps <3 REST/JSON | Martin Jarvis Martin Jarvis explores the preference for REST and JSON over SOAP and XML for mobile web services.

    Read the article

  • Network is not working anymore - Ubuntu 12.04

    - by Jonathan
    Network is not working anymore - Ubuntu 12.04 Hello, I have a problem with my network connection. I have been using the same laptop with Ubuntu and the same connection for more than a year, and suddenly yesterday the connection stopped working (both wireless and wired). I've tested with another computer and the connection is fine (both wireless and wired). I've been reading similar posts but I haven't found a solution yet. I tried a few commands that I'm posting here (my system is in Spanish, so I have translated it to English; maybe the terms are not accurate): grep -i eth /var/log/syslog | tail Jun 3 18:45:40 vanesa-pc NetworkManager[3584]: (eth0): now managed Jun 3 18:45:40 vanesa-pc NetworkManager[3584]: (eth0): device state change: unmanaged - unavailable (reason 'managed') [10 20 2] Jun 3 18:45:40 vanesa-pc NetworkManager[3584]: (eth0): bringing up device. Jun 3 18:45:40 vanesa-pc NetworkManager[3584]: (eth0): preparing device. Jun 3 18:45:40 vanesa-pc kernel: [ 7351.845743] forcedeth 0000:00:0a.0: irq 41 for MSI/MSI-X Jun 3 18:45:40 vanesa-pc kernel: [ 7351.845984] forcedeth 0000:00:0a.0: eth0: no link during initialization Jun 3 18:45:40 vanesa-pc kernel: [ 7351.847103] ADDRCONF(NETDEV_UP): eth0: link is not ready Jun 3 18:45:40 vanesa-pc NetworkManager[3584]: (eth0): deactivating device (reason 'managed') [2] Jun 3 18:45:40 vanesa-pc NetworkManager[3584]: Added default wired connection 'Wired connection 1' for /sys/devices/pci0000:00/0000:00:0a.0/net/eth0 Jun 3 18:45:40 vanesa-pc kernel: [ 7351.848817] ADDRCONF(NETDEV_UP): eth0: link is not ready ifconfig -a eth0 Link encap:Ethernet addressHW 00:1b:24:fc:a8:d1 ACTIVE BROADCAST MULTICAST MTU:1500 Metric:1 Packages RX:0 errors:16 lost:0 overruns:0 frame:16 Packages TX:123 errors:0 lost:0 overruns:0 carrier:0 colissions:0 length.tailTX:1000 Bytes RX:0 (0.0 B) TX bytes:26335 (26.3 KB) Interruption:41 Base address: 0x2000 lo Link encap:Local loop Inet address:127.0.0.1 Mask:255.0.0.0 Inet6 address: ::1/128 Scope:Host ACTIVE LOOP WORKING MTU:16436 Metrics:1 Packages RX:1550 errors:0 lost:0 overruns:0 frame:0 Packages TX:1550 errors:0 lost:0 overruns:0 carrier:0 colissions:0 long.tailTX:0 Bytes RX:125312 (125.3 KB) TX bytes:125312 (125.3 KB) iwconfig lo no wireless extensions. eth0 no wireless extensions. 
sudo lshw -C network *-network description: Ethernet interface product: MCP67 Ethernet manufacturer: NVIDIA Corporation Physical id: a bus information: pci@0000:00:0a.0 logical name: eth0 version: a2 series: 00:1b:24:fc:a8:d1 capacity: 100Mbit/s width: 32 bits clock: 66MHz capacities: pm msi ht bus_master cap_list ethernet physical mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=forcedeth driverversion=0.64 latency=0 link=no maxlatency=20 mingnt=1 multicast=yes port=MII resources: irq:41 memoria:f6288000-f6288fff ioport:30f8(size=8) memoria:f6289c00-f6289cff memoria:f6289800-f628980f lsmod Module Size Used by usbhid 41906 0 hid 77367 1 usbhid rfcomm 38139 0 parport_pc 32114 0 ppdev 12849 0 bnep 17830 2 bluetooth 158438 10 rfcomm,bnep binfmt_misc 17292 1 joydev 17393 0 hp_wmi 13652 0 sparse_keymap 13658 1 hp_wmi nouveau 708198 3 ttm 65344 1 nouveau drm_kms_helper 45466 1 nouveau drm 197692 5 nouveau,ttm,drm_kms_helper i2c_algo_bit 13199 1 nouveau psmouse 87213 0 mxm_wmi 12859 1 nouveau serio_raw 13027 0 k8temp 12905 0 i2c_nforce2 12906 0 wmi 18744 2 hp_wmi,mxm_wmi video 19068 1 nouveau mac_hid 13077 0 lp 17455 0 parport 40930 3 parport_pc,ppdev,lp forcedeth 58096 0 Let me know if I can give you more information. Thank you very much in advance, Jonathan
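
    The syslog and lshw output above both report no link on eth0 (link=no, "no link during initialization"), which points at the physical layer rather than the Ubuntu configuration. Not from the post, but a few generic checks for a forcedeth NIC that stopped detecting link; the commands are standard, the interpretation is an assumption:

        # Does the driver see a carrier at all? Look for "Link detected: yes/no".
        # (install ethtool first if missing: sudo apt-get install ethtool)
        sudo ethtool eth0

        # Reload the driver and watch the kernel messages while re-plugging the cable.
        sudo modprobe -r forcedeth && sudo modprobe forcedeth
        dmesg | tail -n 20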

    Read the article

  • E-Business Suite Sessions at Sangam 2013 in Hyderabad

    - by Sara Woodhull
    The Sangam 2013 conference, sponsored jointly by the All-India Oracle Users' Group (AIOUG) and India Oracle Applications Users Group (IOAUG), will be in Hyderabad, India on November 8-9, 2013.  This year, the E-Business Suite Applications Technology Group (ATG) will offer two speaker sessions and a walk-in usability test of upcoming EBS user interface features.  It's only about two weeks away, so make your plans to attend if you are in India. Sessions Oracle E-Business Suite Technology: Latest Features and Roadmap Veshaal Singh, Senior Director, ATG Development Friday, Nov. 9, 11:00-12:00 This Oracle development session provides an overview of Oracle's product strategy for Oracle E-Business Suite technology, the capabilities and associated business benefits of recent releases, and a review of capabilities on the product roadmap. This is the cornerstone session for Oracle E-Business Suite technology. Come hear about the latest new usability enhancements of the user interface; systems administration and configuration management tools; security-related updates; and tools and options for extending, customizing, and integrating Oracle E-Business Suite with other applications. Integration Options for Oracle E-Business Suite Rekha Ayothi, Lead Product Manager, ATG Friday, Nov. 9, 2:00-3:00 In this Oracle development session, you will get an understanding of how, when and where you can leverage Oracle's integration technologies to connect end-to-end business processes across your enterprise, including your Oracle Applications portfolio. This session offers a technical look at Oracle E-Business Suite Integrated SOA Gateway, Oracle SOA Suite, Oracle Application Adapters for Data Integration for Oracle E-Business Suite, and other options for integrating Oracle E-Business Suite with other applications. Usability Testing There will be multiple opportunities to participate in usability testing at Sangam '13.  The User Experience team is running a one-on-one usability study that requires advance registration.  In addition, we will be hosting a special walk-in usability lab to get feedback for new Oracle E-Business Suite OA Framework features.  The walk-in lab is a shorter usability experience that does not require any pre-registration.  In both cases, Oracle wants your feedback!  Even if you only have a few minutes, come by the User Experience Lab, meet the team, and try the walk-in lab.

    Read the article

  • openssl/rand.h header file not found

    - by Arun Reddy Kandoor
    I have installed libssl-dev package but that did not install the include files. How do I get the openssl include files? Appreciate your help. Checking for program g++ or c++ : /usr/bin/g++ Checking for program cpp : /usr/bin/cpp Checking for program ar : /usr/bin/ar Checking for program ranlib : /usr/bin/ranlib Checking for g++ : ok Checking for node path : ok /usr/bin/node Checking for node prefix : ok /usr Checking for header openssl/rand.h : not found /home/arun/Documents/webserver/node_modules/bcrypt/wscript:30: error: the configuration failed (see '/home/arun/Documents/webserver/node_modules/bcrypt/build/config.log') npm ERR! error installing [email protected] npm ERR! [email protected] preinstall: `node-waf clean || (exit 0); node-waf configure build` npm ERR! `sh "-c" "node-waf clean || (exit 0); node-waf configure build"` failed with 1 npm ERR! npm ERR! Failed at the [email protected] preinstall script. npm ERR! This is most likely a problem with the bcrypt package, npm ERR! not with npm itself. npm ERR! Tell the author that this fails on your system: npm ERR! node-waf clean || (exit 0); node-waf configure build npm ERR! You can get their info via: npm ERR! npm owner ls bcrypt npm ERR! There is likely additional logging output above. npm ERR! npm ERR! System Linux 3.8.0-32-generic npm ERR! command "node" "/usr/bin/npm" "install" npm ERR! cwd /home/arun/Documents/webserver npm ERR! node -v v0.6.12 npm ERR! npm -v 1.1.4 npm ERR! code ELIFECYCLE npm ERR! message [email protected] preinstall: `node-waf clean || (exit 0); node-waf configure build` npm ERR! message `sh "-c" "node-waf clean || (exit 0); node-waf configure build"` failed with 1 npm ERR! errno {} npm ERR! npm ERR! Additional logging details can be found in: npm ERR! /home/arun/Documents/webserver/npm-debug.log npm not ok
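
    A quick, hedged check (not from the post): on Ubuntu the OpenSSL headers belong to the libssl-dev package, so confirming the package really installed and that rand.h is present narrows down whether the problem is the headers themselves or node-waf's search path:

        # Reinstall the headers and verify openssl/rand.h exists
        sudo apt-get install --reinstall libssl-dev
        dpkg -L libssl-dev | grep rand.h
        ls /usr/include/openssl/rand.h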

    Read the article

  • K2 4.5 Quick Thoughts

    I just finished attending a webcast on K2 4.5 and I thought I'd share a few quick thoughts. Power User Story Improved: Given it is just a presentation and I haven't actually played with it, the story seemed improved and more believable that real-world power users would be able to define workflows in SharePoint.  Power users who are comfortable with Excel functions may be able to do some more worthwhile workflows since there is new support for inline functions and conditions.  The new Silverlight K2 designer seems pretty user friendly, though the dialog windows can really stack up, which may get confusing.  I thought the neatest part was that the workflow can be defined just by starting with a SharePoint list's settings, which may be okay for some organizations and simpler workflows that don't need to define the workflow and push it through lots of testing in different environments.  The standalone K2 Studio is back.  In K2 2003 it was required because Visual Studio integration didn't exist.  It's back now for use by power users who need functionality up to the point of code.  Not sure if this. Administration/Installation: Installation is supposed to be simplified, with unattended install and other details I didn't catch.  Install and configuration has always seemed daunting to me, so anything to improve that is good.  Related to that, there is a new tool that is meant to help diagnose issues in your installation.  That may include figuring out missing permissions or services that aren't running.  Also, all K2 SharePoint features are now deployed as solutions. Dynamic SQL Service Broker: Create a smart object to go against a table that you created, NOT the SmartBox.  This seems promising and something that maybe should have been there all along. Reference Event: Allows you to call functionality that you've referenced; in the sample shown it was calling a web service that was referenced.  It seemed odd because it was really like writing code using dialogs (call constructor, set timeout, call web service method).  Seemed a little odd to me. Help: We were reminded that the help.k2.com site is a newish site that is supposed to be the MSDN of K2 for partners and customers. VS 2010 Support: Still no hard date on this, but what we were told is approximately 90 days after VS 2010 is officially released.

    Read the article

  • ASP.NET MVC 3 Hosting :: How to Upgrade ASP.NET MVC 2 Project to ASP.NET MVC 3

    - by mbridge
    ASP.NET MVC 3 can be installed side by side with ASP.NET MVC 2 on the same computer, which gives you flexibility in choosing when to upgrade an ASP.NET MVC 2 application to ASP.NET MVC 3. The simplest way to upgrade is to create a new ASP.NET MVC 3 project and copy all the views, controllers, code, and content files from the existing MVC 2 project to the new project and then to update the assembly references in the new project to match the old project. If you have made changes to the Web.config file in the MVC 2 project, you must also merge those changes with the Web.config file in the MVC 3 project. To manually upgrade an existing ASP.NET MVC 2 application to version 3, do the following: 1. In both Web.config files in the MVC 3 project, globally search and replace the MVC version. Find the following: System.Web.Mvc, Version=2.0.0.0 Replace it with the following System.Web.Mvc, Version=3.0.0.0 There are three changes in the root Web.config and four in the Views\Web.config file. 2. In Solution Explorer, delete the reference to System.Web.Mvc (which points to the version 2 DLL). Then add a reference to System.Web.Mvc (v3.0.0.0). 3. In Solution Explorer, right-click the project name and then select Unload Project. Then right-click again and select Edit ProjectName.csproj. 4. Locate the ProjectTypeGuids element and replace {F85E285D-A4E0-4152-9332-AB1D724D3325} with {E53F8FEA-EAE0-44A6-8774-FFD645390401}. 5. Save the changes and then right-click the project and select Reload Project. 6. If the project references any third-party libraries that are compiled using ASP.NET MVC 2, add the following highlighted bindingRedirect element to the Web.config file in the application root under the configuration section: <runtime>   <assemblyBinding >     <dependentAssembly>       <assemblyIdentity name="System.Web.Mvc"           publicKeyToken="31bf3856ad364e35"/>       <bindingRedirect oldVersion="2.0.0.0" newVersion="3.0.0.0"/>     </dependentAssembly>   </assemblyBinding> </runtime> Another ASP.NET MVC 3 article: - Rolling with Razor in MVC v3 Preview - Deploying ASP.NET MVC 3 web application to server where ASP.NET MVC 3 is not installed - RenderAction with ASP.NET MVC 3 Sessionless Controllers

    Read the article

  • Extending the ADF Controller exception handler

    - by frank.nimphius
    The Oracle ADF Controller provides a declarative option for developers to define a view activity, method activity, or router activity to handle exceptions in bounded or unbounded task flows. Exception handling, however, covers exceptions only, not all types of Throwable. Furthermore, exceptions that occur during the JSF RENDER RESPONSE phase are not looked at either, as that is considered too late in the cycle. For developers who want to handle otherwise unhandled exceptions in ADF Controller themselves, it is possible to extend the default exception handling while still leveraging the declarative configuration. To add your own exception handler: · Create a Java class that extends ExceptionHandler · Create a text file with the name “oracle.adf.view.rich.context.Exceptionhandler” (without the quotes) and store it in .adf\META-INF\services (you need to create the “services” folder) · In the file, add the absolute name of your custom exception handler class (package name and class name without the “.class” extension) For any exception you don't handle in your custom exception handler, just re-throw it for the default handler to give it a try …
        import javax.faces.context.FacesContext;
        import javax.faces.event.PhaseId;
        import oracle.adf.view.rich.context.ExceptionHandler;

        public class MyCustomExceptionHandler extends ExceptionHandler {
            public MyCustomExceptionHandler() {
                super();
            }

            public void handleException(FacesContext facesContext,
                                         Throwable throwable, PhaseId phaseId)
                                         throws Throwable {
                String error_message;
                error_message = throwable.getMessage();
                //check error message and handle it if you can
                if( … ){
                    //handle exception
                    …
                }
                else{
                    //delegate to the default ADFc exception handler
                    throw throwable;
                }
            }
        }
    Note, however, that it is recommended to first try and handle exceptions with the ADF Controller default exception handling mechanism. In the past, I've seen attempts on OTN to handle regular application use cases with custom exception handlers where there was no need to override the exception handler. So don't go for this solution too quickly, and always think of alternative solutions. Sometimes a try-catch-finally block does it better than sophisticated web exception handling.
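
    For illustration, the registration file described in the steps above contains a single line: the fully qualified name of the handler class (the package name below is a made-up example):

        File: .adf\META-INF\services\oracle.adf.view.rich.context.Exceptionhandler
        Contents (one line):
        mypackage.MyCustomExceptionHandler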

    Read the article

  • FREE SPECIALIZATION EXAMS 2013

    - by agallego
    Get your Oracle Specialist certificate for FREE, November 27-28, 2013. You can now take the Oracle specialization implementation exams and become a specialist. You can take any of the implementation exams from the following list: Oracle Fusion Customer Relationship Management 11g Sales Certified Implementation Specialist (1Z0-456) Oracle Fusion Financials 11g General Ledger Implementation Specialist (1Z0-508) Oracle Fusion Financials 11g Accounts Payable Implementation Specialist (1Z0-507) Oracle Fusion Financials 11g Accounts Receivable Implementation Specialist (1Z0-506) Oracle Fusion Human Capital Management 11g Human Resources Certified Implementation Specialist (1Z0-584) Oracle Fusion Human Capital Management 11g Talent Management Certified Implementation Specialist (1Z0-585) Oracle ATG Web Commerce 10 Implementation Developer Certified Implementation Specialist (1Z0-510) Oracle Hyperion Planning 11 Essentials (1Z0-533) Oracle Business Intelligence Applications 7.9.6 for ERP Essentials (1Z0-525) Oracle Hyperion Financial Management 11 Essentials (1Z0-532) Oracle Business Intelligence Applications 7.9.6 for CRM Essentials (1Z0-524) Oracle Hyperion Data Relationship Management 11.1.2 Essentials (1Z1-588) Oracle Business Intelligence Foundation Suite 11g Essentials (1Z0-591) Oracle Essbase 11 Essentials (1Z0-531) SPARC T4-Based Server Installation Essentials (1Z0-597) Oracle Solaris 11 Installation and Configuration Essentials (1Z0-580) Sun ZFS Storage Appliance Certified Implementation Specialist Exalogic Elastic Cloud X2-2 Certified Implementation Specialist Oracle Exadata 11g Essentials (1Z0-536) Oracle VM 3 for x86 Essentials (1Z0-590) Oracle Linux Fundamentals (1Z0-402) Oracle Linux System Administration (1Z0-403) Oracle Service-Oriented Architecture Certified Implementation Specialist (1Z0-451) Oracle Unified Business Process Management Suite 11g Certified Implementation Specialist (1Z0-560) Oracle WebLogic Server 12c Certified Implementation Specialist (1Z0-599) Oracle WebCenter Portal 11g Essentials (1Z0-541) Oracle WebCenter Content 11g Essentials (1Z0-542) Oracle Application Development Framework 11g Certified Implementation Specialist Oracle Identity Analytics 11g Certified Implementation Specialist (1Z0-545) Oracle Enterprise Manager 12c Essentials (1Z0-457) You can find information about each exam at the links above. To prepare for an exam, follow the study guide on that exam's page. Requirements: be an Oracle Gold, Platinum or Diamond partner and have an Oracle Pearson Vue account. When? November 27 and 28, at 9:00, 12:00 and 16:00. Where? Core Networks, C.E. Parque Norte, Edificio Olmo, Planta 1, Serrano Galvache 56 | 28033, Madrid. To register: create an account at Pearson Vue (www.pearsonvue.com/oracle) and register here. For more information about the specialization program, click here. Don't miss this opportunity; register today. For any questions, contact [email protected]. Ana María Gallego, Partner Enablement Manager, Spain and Portugal

    Read the article

  • Error compiling GLib in Ubuntu 14.04 (trying to install GimpShop)

    - by Nicolás Salvarrey
    I'm kinda new in Linux, so please take it easy on the most complicated stuff. I'm trying to install GimpShop. Installation guide asks me to install GLib first, and when I try to compile it using the make command I get errors. When I run the ./configure --prefix=/usr command, I get this: checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for gawk... no checking for mawk... mawk checking whether make sets $(MAKE)... yes checking whether to enable maintainer-specific portions of Makefiles... no checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking for the BeOS... no checking for Win32... no checking whether to enable garbage collector friendliness... no checking whether to disable memory pools... no checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ANSI C... none needed checking for style of include used by make... GNU checking dependency style of gcc... gcc3 checking for c++... no checking for g++... no checking for gcc... gcc checking whether we are using the GNU C++ compiler... no checking whether gcc accepts -g... no checking dependency style of gcc... gcc3 checking for gcc option to accept ANSI C... none needed checking for a BSD-compatible install... /usr/bin/install -c checking for special C compiler options needed for large files... no checking for _FILE_OFFSET_BITS value needed for large files... no checking for _LARGE_FILES value needed for large files... no checking for pkg-config... /usr/bin/pkg-config checking for gawk... (cached) mawk checking for perl5... no checking for perl... perl checking for indent... no checking for perl... /usr/bin/perl checking for iconv_open... yes checking how to run the C preprocessor... gcc -E checking for egrep... grep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking locale.h usability... yes checking locale.h presence... yes checking for locale.h... yes checking for LC_MESSAGES... yes checking libintl.h usability... yes checking libintl.h presence... yes checking for libintl.h... yes checking for ngettext in libc... yes checking for dgettext in libc... yes checking for bind_textdomain_codeset... yes checking for msgfmt... /usr/bin/msgfmt checking for dcgettext... yes checking for gmsgfmt... /usr/bin/msgfmt checking for xgettext... /usr/bin/xgettext checking for catalogs to be installed... am ar az be bg bn bs ca cs cy da de el en_CA en_GB eo es et eu fa fi fr ga gl gu he hi hr id is it ja ko lt lv mk mn ms nb ne nl nn no or pa pl pt pt_BR ro ru sk sl sq sr sr@ije sr@Latn sv ta tl tr uk vi wa xh yi zh_CN zh_TW checking for a sed that does not truncate output... /bin/sed checking for ld used by gcc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for /usr/bin/ld option to reload object files... -r checking for BSD-compatible nm... /usr/bin/nm -B checking whether ln -s works... 
yes checking how to recognise dependent libraries... pass_all checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking for g77... no checking for f77... no checking for xlf... no checking for frt... no checking for pgf77... no checking for fort77... no checking for fl32... no checking for af77... no checking for f90... no checking for xlf90... no checking for pgf90... no checking for epcf90... no checking for f95... no checking for fort... no checking for xlf95... no checking for ifc... no checking for efc... no checking for pgf95... no checking for lf95... no checking for gfortran... no checking whether we are using the GNU Fortran 77 compiler... no checking whether accepts -g... no checking the maximum length of command line arguments... 32768 checking command to parse /usr/bin/nm -B output from gcc object... ok checking for objdir... .libs checking for ar... ar checking for ranlib... ranlib checking for strip... strip checking if gcc static flag works... yes checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fPIC checking if gcc PIC flag -fPIC works... yes checking if gcc supports -c -o file.o... yes checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no configure: creating libtool appending configuration tag "CXX" to libtool appending configuration tag "F77" to libtool checking for extra flags to get ANSI library prototypes... none needed checking for extra flags for POSIX compliance... none needed checking for ANSI C header files... (cached) yes checking for vprintf... yes checking for _doprnt... no checking for working alloca.h... yes checking for alloca... yes checking for atexit... yes checking for on_exit... yes checking for char... yes checking size of char... 1 checking for short... yes checking size of short... 2 checking for long... yes checking size of long... 8 checking for int... yes checking size of int... 4 checking for void *... yes checking size of void *... 8 checking for long long... yes checking size of long long... 8 checking for __int64... no checking size of __int64... 0 checking for format to printf and scanf a guint64... %llu checking for an ANSI C-conforming const... yes checking if malloc() and friends prototypes are gmem.h compatible... no checking for growing stack pointer... yes checking for __inline... yes checking for __inline__... yes checking for inline... yes checking if inline functions in headers work... yes checking for ISO C99 varargs macros in C... yes checking for ISO C99 varargs macros in C++... no checking for GNUC varargs macros... yes checking for GNUC visibility attribute... yes checking whether byte ordering is bigendian... no checking dirent.h usability... yes checking dirent.h presence... yes checking for dirent.h... yes checking float.h usability... yes checking float.h presence... yes checking for float.h... yes checking limits.h usability... yes checking limits.h presence... yes checking for limits.h... yes checking pwd.h usability... yes checking pwd.h presence... yes checking for pwd.h... 
yes checking sys/param.h usability... yes checking sys/param.h presence... yes checking for sys/param.h... yes checking sys/poll.h usability... yes checking sys/poll.h presence... yes checking for sys/poll.h... yes checking sys/select.h usability... yes checking sys/select.h presence... yes checking for sys/select.h... yes checking for sys/types.h... (cached) yes checking sys/time.h usability... yes checking sys/time.h presence... yes checking for sys/time.h... yes checking sys/times.h usability... yes checking sys/times.h presence... yes checking for sys/times.h... yes checking for unistd.h... (cached) yes checking values.h usability... yes checking values.h presence... yes checking for values.h... yes checking for stdint.h... (cached) yes checking sched.h usability... yes checking sched.h presence... yes checking for sched.h... yes checking langinfo.h usability... yes checking langinfo.h presence... yes checking for langinfo.h... yes checking for nl_langinfo... yes checking for nl_langinfo and CODESET... yes checking whether we are using the GNU C Library 2.1 or newer... yes checking stddef.h usability... yes checking stddef.h presence... yes checking for stddef.h... yes checking for stdlib.h... (cached) yes checking for string.h... (cached) yes checking for setlocale... yes checking for size_t... yes checking size of size_t... 8 checking for the appropriate definition for size_t... unsigned long checking for lstat... yes checking for strerror... yes checking for strsignal... yes checking for memmove... yes checking for mkstemp... yes checking for vsnprintf... yes checking for stpcpy... yes checking for strcasecmp... yes checking for strncasecmp... yes checking for poll... yes checking for getcwd... yes checking for nanosleep... yes checking for vasprintf... yes checking for setenv... yes checking for unsetenv... yes checking for getc_unlocked... yes checking for readlink... yes checking for symlink... yes checking for C99 vsnprintf... yes checking whether printf supports positional parameters... yes checking for signed... yes checking for long long... (cached) yes checking for long double... yes checking for wchar_t... yes checking for wint_t... yes checking for size_t... (cached) yes checking for ptrdiff_t... yes checking for inttypes.h... yes checking for stdint.h... yes checking for snprintf... yes checking for C99 snprintf... yes checking for sys_errlist... yes checking for sys_siglist... yes checking for sys_siglist declaration... yes checking for fd_set... yes, found in sys/types.h checking whether realloc (NULL,) will work... yes checking for nl_langinfo (CODESET)... yes checking for OpenBSD strlcpy/strlcat... no checking for an implementation of va_copy()... yes checking for an implementation of __va_copy()... yes checking whether va_lists can be copied by value... no checking for dlopen... no checking for NSLinkModule... no checking for dlopen in -ldl... yes checking for dlsym in -ldl... yes checking for RTLD_GLOBAL brokenness... no checking for preceeding underscore in symbols... no checking for dlerror... yes checking for the suffix of shared libraries... .so checking for gspawn implementation... gspawn.lo checking for GIOChannel implementation... giounix.lo checking for platform-dependent source... checking whether to compile timeloop... yes checking if building for some Win32 platform... no checking for thread implementation... posix checking thread related cflags... -pthread checking for sched_get_priority_min... yes checking thread related libraries... 
-pthread checking for localtime_r... yes checking for posix getpwuid_r... yes checking size of pthread_t... 8 checking for pthread_attr_setstacksize... yes checking for minimal/maximal thread priority... sched_get_priority_min(SCHED_OTHER)/sched_get_priority_max(SCHED_OTHER) checking for pthread_setschedparam... yes checking for posix yield function... sched_yield checking size of pthread_mutex_t... 40 checking byte contents of PTHREAD_MUTEX_INITIALIZER... 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 checking whether to use assembler code for atomic operations... x86_64 checking value of POLLIN... 1 checking value of POLLOUT... 4 checking value of POLLPRI... 2 checking value of POLLERR... 8 checking value of POLLHUP... 16 checking value of POLLNVAL... 32 checking for EILSEQ... yes configure: creating ./config.status config.status: creating glib-2.0.pc config.status: creating glib-2.0-uninstalled.pc config.status: creating gmodule-2.0.pc config.status: creating gmodule-no-export-2.0.pc config.status: creating gmodule-2.0-uninstalled.pc config.status: creating gthread-2.0.pc config.status: creating gthread-2.0-uninstalled.pc config.status: creating gobject-2.0.pc config.status: creating gobject-2.0-uninstalled.pc config.status: creating glib-zip config.status: creating glib-gettextize config.status: creating Makefile config.status: creating build/Makefile config.status: creating build/win32/Makefile config.status: creating build/win32/dirent/Makefile config.status: creating glib/Makefile config.status: creating glib/libcharset/Makefile config.status: creating glib/gnulib/Makefile config.status: creating gmodule/Makefile config.status: creating gmodule/gmoduleconf.h config.status: creating gobject/Makefile config.status: creating gobject/glib-mkenums config.status: creating gthread/Makefile config.status: creating po/Makefile.in config.status: creating docs/Makefile config.status: creating docs/reference/Makefile config.status: creating docs/reference/glib/Makefile config.status: creating docs/reference/glib/version.xml config.status: creating docs/reference/gobject/Makefile config.status: creating docs/reference/gobject/version.xml config.status: creating tests/Makefile config.status: creating tests/gobject/Makefile config.status: creating m4macros/Makefile config.status: creating config.h config.status: config.h is unchanged config.status: executing depfiles commands config.status: executing default-1 commands config.status: executing glibconfig.h commands config.status: glibconfig.h is unchanged config.status: executing chmod-scripts commands nsalvarrey@Delleuze:~/glib-2.6.3$ ^C nsalvarrey@Delleuze:~/glib-2.6.3$ And then, with the make command, I get this: galias.h:83:39: error: 'g_ascii_digit_value' aliased to undefined symbol 'IA__g_ascii_digit_value' extern __typeof (g_ascii_digit_value) g_ascii_digit_value __attribute((alias("IA__g_ascii_digit_value"), visibility("default"))); ^ In file included from garray.c:35:0: galias.h:31:35: error: 'g_allocator_new' aliased to undefined symbol 'IA__g_allocator_new' extern __typeof (g_allocator_new) g_allocator_new __attribute((alias("IA__g_allocator_new"), visibility("default"))); ^ make[4]: *** [garray.lo] Error 1 make[4]: se sale del directorio «/home/nsalvarrey/glib-2.6.3/glib» make[3]: *** [all-recursive] Error 1 make[3]: se sale del directorio «/home/nsalvarrey/glib-2.6.3/glib» make[2]: *** [all] Error 2 make[2]: se sale del directorio «/home/nsalvarrey/glib-2.6.3/glib» make[1]: *** [all-recursive] Error 1 
make[1]: se sale del directorio «/home/nsalvarrey/glib-2.6.3» make: *** [all] Error 2 nsalvarrey@Delleuze:~/glib-2.6.3$ (it's actually a lot longer) Can somebody help me?
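
    Not a fix for the galias errors themselves, but a hedged aside: a stock Ubuntu install normally already ships a much newer GLib, so before compiling the 2.6.3 tarball by hand it can be worth checking what the system already provides. The package names below assume the standard Ubuntu repositories.

      pkg-config --modversion glib-2.0     # version of the GLib development files already installed, if any
      dpkg -l | grep libglib2.0            # GLib runtime/-dev packages currently on the system
      apt-cache policy libglib2.0-dev      # what the repositories would install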

    Read the article

  • Is inline SQL still classed as bad practice now that we have Micro ORMs?

    - by Grofit
    This is a bit of an open-ended question, but I wanted some opinions. I grew up in a world where inline SQL scripts were the norm; then we were all made very aware of SQL injection issues and of how fragile the SQL became when string manipulation was happening all over the place. Then came the dawn of the ORM, where you described the query to the ORM and let it generate its own SQL - in a lot of cases not optimal, but safe and easy. Another good thing about ORMs and database abstraction layers was that the SQL was generated with its database engine in mind, so I could use Hibernate/NHibernate with MSSQL or MySQL and my code never changed; it was just a configuration detail.

    Now fast forward to the current day, where micro ORMs seem to be winning over more developers, and I wonder why we have seemingly taken a U-turn on the whole inline SQL subject. I must admit I like the idea of having no ORM config files and being able to write my queries in a more optimal manner, but it feels like I am opening myself back up to the old vulnerabilities such as SQL injection, and I am also tying myself to one database engine, so if I want my software to support multiple database engines I would need to do more string hackery, which starts to make code unreadable and more fragile. (Before someone mentions it: I know you can use parameter-based arguments with most micro ORMs, which offers protection from SQL injection in most cases.)

    So what are people's opinions on this sort of thing? I am using Dapper as my micro ORM in this instance and NHibernate as my regular ORM, although most in each field are quite similar. What I term inline SQL is SQL strings within source code. There used to be design debates over SQL strings in source code detracting from the fundamental intent of the logic, which is why statically typed LINQ-style queries became so popular: it is still just one language, whereas with, say, C# and SQL on one page you have two languages intermingled in your raw source code.

    Just to clarify, SQL injection is only one of the known issues with using SQL strings; I already mentioned that you can stop it with parameter-based queries. The other issues with having SQL queries ingrained in your source code are the lack of DB vendor abstraction and the loss of any compile-time error checking on string-based queries - issues we managed to sidestep (most of them, at least) with the dawn of ORMs and their higher-level querying functionality, such as HQL or LINQ. So I am less focused on the individual issues and more on the bigger picture: is it now becoming more acceptable to have SQL strings directly in your source code again, given that most micro ORMs use this mechanism? Here is a similar question with a few different viewpoints, although it is more about inline SQL without the micro ORM context: http://stackoverflow.com/questions/5303746/is-inline-sql-hard-coding

    Read the article

  • apt-get broken + dependencies issue + many software uninstalled

    - by vnc786
    OS: Ubuntu 12.04, 64-bit, kernel 3.2.0-29-generic.

    What I did: I ran apt-get purge libre*, which I thought would remove LibreOffice 3.5, but that was a big mistake. After a couple of moments I realised it was removing all sorts of other software, so I pressed Ctrl-C, which stopped the apt process. I restarted my system, and after that I found that the following software was missing: apt-get, evince (PDF viewer), cheese, etc. The full list is here: http://pastebin.com/CWHrw10y. I managed to install the apt .deb file through dpkg, but now I am not able to install anything, so I removed /var/lib/dpkg/status and created a new one, but that didn't help.

    apt-get -f install
    Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: apt-utils coreutils debconf debconf-i18n dpkg libacl1 libapt-inst1.4 libapt-pkg4.12 libattr1 libbz2-1.0 libdb5.1 libgcc1 liblocale-gettext-perl liblzma5 libselinux1 libstdc++6 libtext-charwidth-perl libtext-iconv-perl libtext-wrapi18n-perl perl-base tar tzdata xz-utils zlib1g Suggested packages: debconf-doc debconf-utils whiptail dialog gnome-utils libterm-readline-gnu-perl libgtk2-perl libnet-ldap-perl libqtgui4-perl libqtcore4-perl apt bzip2 ncompress xz-lzma The following NEW packages will be installed: apt-utils coreutils debconf debconf-i18n dpkg libacl1 libapt-inst1.4 libapt-pkg4.12 libattr1 libbz2-1.0 libdb5.1 libgcc1 liblocale-gettext-perl liblzma5 libselinux1 libstdc++6 libtext-charwidth-perl libtext-iconv-perl libtext-wrapi18n-perl perl-base tar tzdata xz-utils zlib1g 0 upgraded, 24 newly installed, 0 to remove and 0 not upgraded. 2 not fully installed or removed. Need to get 0 B/9,246 kB of archives. After this operation, 29.9 MB of additional disk space will be used. Do you want to continue [Y/n]? y E: Cannot get debconf version. Is debconf installed? debconf: apt-extracttemplates failed: No such file or directory dpkg: regarding .../libgcc1_1%3a4.6.3-1ubuntu5_amd64.deb containing libgcc1, pre-dependency problem: libgcc1 pre-depends on multiarch-support multiarch-support is unpacked, but has never been configured. dpkg: error processing /var/cache/apt/archives/libgcc1_1%3a4.6.3-1ubuntu5_amd64.deb (--unpack): pre-dependency problem - not installing libgcc1 No apport report written because MaxReports is reached already Errors were encountered while processing: /var/cache/apt/archives/libgcc1_1%3a4.6.3-1ubuntu5_amd64.deb E: Internal Error, No file name for libc6 W: Could not perform immediate configuration on 'multiarch-support:amd64'. Please see man 5 apt.conf under APT::Immediate-Configure for details. (2) E: Sub-process /usr/bin/dpkg returned an error code (1)

    # apt-get -u dist-upgrade
    Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: libc6 : Depends: libgcc1 but it is not installed Depends: tzdata but it is not installed E: Unmet dependencies. Try using -f.

    I have tried the suggestions in "Unable to install due to debconf problem" and "How do I resolve unmet dependencies?" but they did not help. Thanks.
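
    A hedged sketch of the usual recovery sequence, not a verified fix for this particular machine: the dpkg error above says multiarch-support was unpacked but never configured, which is what blocks libgcc1, so configuring it first and then letting dpkg and apt finish the half-installed packages is the typical order. Exact file names under /var/cache/apt/archives will differ.

      sudo dpkg --configure multiarch-support    # finish the package that blocks libgcc1
      sudo dpkg --configure -a                   # configure anything else left half-installed
      sudo dpkg -i /var/cache/apt/archives/libgcc1_*.deb /var/cache/apt/archives/tzdata_*.deb
      sudo apt-get -f install                    # let apt resolve the remaining dependencies
      sudo apt-get install --reinstall ubuntu-minimal ubuntu-standard   # pull back core packages removed by the purge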

    Read the article

  • links for 2011-02-16

    - by Bob Rhubart
    On the Software Architect Trail Software architect is the #1 job, according to a 2010 CNN-Money poll. In this article in Oracle Magazine, several members of the OTN architect community talk about the career paths that led them to this lucrative role.  (tags: oracle oraclemagazine softwarearchitect) Oracle Technology Network Architect Day: Denver Registration opens soon for this event to be held in Denver on March 23, 2011.  (tags: oracle otn entarch) How the Internet Gets Inside Us : The New Yorker "It isn’t just that we’ve lived one technological revolution among many; it’s that our technological revolution is the big social revolution that we live with." - Adam Gopnik (tags: internet progress technology innovation) The Insider Threat: Understand and Mitigate Your Risks: CSO Webcast February 23, 2011 at 1:00 PM EST/ 10:00 AM PST .  Speakers: Randy Trzeciak, lead for the CERT Insider Threat research team, and  Roxana Bradescu, Director of Database Security at Oracle. (tags: oracle CERT security) The Tom Kyte Blog: An Interesting Read... Tom looks at "an internet security firm brought down by not following the most *basic* of security principals." (tags: security oracle) Jason Williamson: Oracle as a Service in the Cloud "It is not trivial to migrate large amounts of pre-relational or 'devolved' relational data. To do this, we again must revert back to a tight roadmap to migration and leverage the growing tools and services that we have." - Jason Williamson (tags: oracle cloud soa) Edwin Biemond: Java / Oracle SOA blog: Building an asynchronous web service with JAX-WS "Building an asynchronous web service can be complex especially when you are used to synchronous Web services where you can wait for the response in your favorite tool." - Oracle ACE Edwin Biemond (tags: oracle oracleace java soa) Shared Database Servers (The SaaS Report) "Outside the virtualization world, there are capabilities of Oracle Database which can be used to prevent resource contention and guarantee SLA." - Shivanshu Upadhyay (tags: oracle database cloud SaaS) White Paper: Experiencing the New Social Enterprise "Increasingly organizations recognize the mandate to create a modern user experience that transforms existing business processes and increases business efficiency and agility." (tags: e20 enterprise2.0 socialcomputing oracle) Clusterware 11gR2 - Setting up an Active/Passive failover configuration Gilles Haro illustrates the steps necessary to achieve "a fully operational 11gR2 database protected by automatic failover capabilities." (tags: oracle clusterware) Oracle ERP: How to overcome local hurdles in a global implementation "The corporate world becomes a global village as many companies expand their business and offices around different countries and even continents. And this number keeps increasing. This globalization raises interesting questions..." - Jan Verhallen (tags: oracle capgemini entarch erp) Webcast: Successful Strategies for Optimizing Your Data Warehouse. March 3. 10 a.m. PT/1 p.m. ET Thursday, March 3, 2011. 10 a.m. PT/1 p.m. ET. Speakers: Mala Narasimharajan (Senior Product Marketing Manager, Oracle Data Integration) and Denis Gray (Principal Product Manager, Oracle Data Integration) (tags: oracle dataintegration datawarehousing)

    Read the article

  • New Options for MySQL High Availability

    - by Mat Keep
    Data is the currency of today's web, mobile, social, enterprise and cloud applications. Ensuring data is always available is a top priority for any organization - minutes of downtime will result in significant loss of revenue and reputation. There is no "one size fits all" approach to delivering High Availability (HA). Unique application attributes, business requirements, operational capabilities and legacy infrastructure can all influence HA technology selection. And technology is only one element in delivering HA - "people and processes" are just as critical as the technology itself. For this reason, MySQL Enterprise Edition is available supporting a range of HA solutions, fully certified and supported by Oracle. MySQL Enterprise HA is not some expensive add-on, but is included within the core Enterprise Edition offering, along with the management tools, consulting and 24x7 support needed to deliver true HA.

    At the recent MySQL Connect conference, we announced new HA options for MySQL users running on both Linux and Solaris:
    - DRBD for MySQL
    - Oracle Solaris Clustering for MySQL

    DRBD (Distributed Replicated Block Device) is an open source Linux kernel module which leverages synchronous replication to deliver high availability database applications across local storage. DRBD synchronizes database changes by mirroring data from an active node to a standby node, and supports automatic failover and recovery. Linux, DRBD, Corosync and Pacemaker provide an integrated stack of mature and proven open source technologies.
    DRBD Stack: Providing Synchronous Replication for the MySQL Database with InnoDB
    Download the DRBD for MySQL whitepaper to learn more, including step-by-step instructions to install, configure and provision DRBD with MySQL.

    Oracle Solaris Cluster provides high availability and load balancing to mission-critical applications and services in physical or virtualized environments. With Oracle Solaris Cluster, organizations have a scalable and flexible solution that is suited equally to small clusters in local datacenters or larger multi-site, multi-cluster deployments that are part of enterprise disaster recovery implementations. The Oracle Solaris Cluster MySQL agent integrates seamlessly with MySQL, offering a selection of configuration options in the various Oracle Solaris Cluster topologies.

    Putting it all together: when you add MySQL Replication and MySQL Cluster into the HA mix, along with 3rd party solutions, users have extensive choice (and decisions to make) to deliver HA services built on MySQL. To make the decision process simpler, we have also published a new MySQL HA Solutions Guide. Exploring beyond just the technology, the guide presents a methodology to select the best HA solution for your new web, cloud and mobile services, while also discussing the importance of people and process in ensuring service continuity. This subject was recently presented at Oracle Open World, and the slides are available here.

    Whatever your uptime requirements, you can be sure MySQL has an HA solution for your needs. Please don't hesitate to let us know of your HA requirements in the comments section of this blog. You can also contact MySQL consulting to learn more about their HA Jumpstart offering, which will help you scope out your scaling and HA requirements.
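
    As a rough illustration of the DRBD layer in the stack described above (the resource name, device and mount point are assumptions, and the Pacemaker/Corosync configuration is deliberately left out), first-time bring-up on the active node typically looks like this:

      drbdadm create-md r0              # write DRBD metadata for the resource
      drbdadm up r0                     # attach the backing device and connect to the peer
      drbdadm primary --force r0        # initial promotion (DRBD 8.4 syntax), before any data exists
      mkfs.ext4 /dev/drbd0              # create a filesystem on the replicated device
      mount /dev/drbd0 /var/lib/mysql   # MySQL's datadir now lives on replicated storage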

    Read the article

  • Cluster Node Recovery Using Second Node in Solaris Cluster

    - by Onur Bingul
    Assumptions:Node 0a is the cluster node that has crashed and could not boot anymore.Node 0b is the node in cluster and in production with services active.Both nodes have their boot disk mirrored via SDS/SVM.We have many options to clone the boot disk from node 0b:- make a copy via network using the ufsdump command and pipe to ufsrestore - make a copy inserting the disk locally on node 0b and creating the third mirror with SDS- make a copy inserting the disk locally on node 0b using dd commandIn this procedure we are going to use dd command (from my experience this is the best option).Bare in mind that in the examples provided we work on Sun Fire V240 systems which have SCSI internal disks. In the case of Fibre Channel (FC) internal disks you must pay attention to the unique identifier, or World Wide Name (WWN), associated with each FC disk (in this case take a look at infodoc #40133 in order to recreate the device tree correctly).Procedure:On node 0b the boot disk is c1t0d0 (c1t1d0 mirror) and this is the VTOC:* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory      0      2    00          0   2106432   2106431      1      3    01    2106432  74630784  76737215      2      5    00          0 143349312 143349311      4      7    00   76737216  50340672 127077887      5      4    00  127077888  14683968 141761855      6      0    00  141761856   1058304 142820159      7      0    00  142820160    529152 143349311We will insert the new disk on node 0b and it will be seen as c1t2d0.1) On node 0b we make a copy via dd from disk c1t0d0s2 to disk c1t2d0s2# dd if=/dev/rdsk/c1t0d0s2 of=/dev/rdsk/c1t2d0s2 bs=8192kA copy of a 72GB disk will take approximately about 45 minutes.Note: as an alternative to make identical copy of root over network follow Document ID: 47498Title: Sun[TM] Cluster 3.0: How to Rebuild a node with Veritas Volume Manager2) Perform an fsck on disk c1t2d0 data slices:   1.  fsck -o f /dev/rdsk/c1t2d0s0 (root)   2.  fsck -o f /dev/rdsk/c1t2d0s4 (/var)   3.  fsck -o f /dev/rdsk/c1t2d0s5 (/usr)   4.  
fsck -o f /dev/rdsk/c1t2d0s6 (/globaldevices)3) Mount the root file system in order to edit following files for changing the node name:# mount /dev/dsk/c1t2d0s0 /mntChange the hostname from 0b to 0a:# cd /mnt/etc# vi hosts # vi hostname.bge0 # vi hostname.bge2 # vi nodename 4) Change the /mnt/etc/vfstab from the actual:/dev/md/dsk/d201        -       -       swap    -       no      -/dev/md/dsk/d200        /dev/md/rdsk/d200       /       ufs     1       no      -/dev/md/dsk/d205        /dev/md/rdsk/d205       /usr    ufs     1       no      logging/dev/md/dsk/d204        /dev/md/rdsk/d204       /var    ufs     1       no      logging#/dev/md/dsk/d206       /dev/md/rdsk/d206       /globaldevices  ufs     2       yes     loggingswap    -       /tmp    tmpfs   -       yes     -/dev/md/dsk/d206        /dev/md/rdsk/d206       /global/.devices/node@2 ufs     2       noglobalto this (unencapsulate disk from SDS/SVM):/dev/dsk/c1t0d0s1        -       -       swap    -       no      -/dev/dsk/c1t0d0s0       /dev/rdsk/c1t0d0s0       /       ufs     1       no      -/dev/dsk/c1t0d0s5       /dev/rdsk/c1t0d0s5       /usr    ufs     1       no      logging/dev/dsk/c1t0d0s4       /dev/rdsk/c1t0d0s4       /var    ufs     1       no      logging#/dev/md/dsk/d206       /dev/md/rdsk/d206       /globaldevices  ufs     2       yes     loggingswap    -       /tmp    tmpfs   -       yes     -/dev/dsk/c1t0d0s6       /dev/rdsk/c1t0d0s6       /global/.devices/node@1 ufs     2       no globalIt is important that global device partition (slice 6) in the new vfstab will point to the physical partition of the disk (in our case slice 6).Be careful with the name you use for the new disk. In this case we define it as c1t0d0 because we will insert it as target 0 in node 0a.But this could be different based on the configuration you are working on.5) Remove following entry from /mnt/etc/system (part of unencapsulation procedure):rootdev:/pseudo/md@0:0,200,blk6) Correct the link shared -> ../../global/.devices/node@2/dev/md/shared in order to point to the nodeid of node 0a (in our case nodeid 1):# cd /mnt/dev/mdhow it is now.... node 0b has nodeid 2lrwxrwxrwx   1 root     root          42 Mar 10  2005 shared ->../../global/.devices/node@2/dev/md/shared# rm shared# ln -s ../../global/.devices/node@1/dev/md/shared sharedhow is going to be... with nodeid 1 for node 0alrwxrwxrwx   1 root     root          42 Mar 10  2005 shared ->../../global/.devices/node@1/dev/md/shared7) Change nodeid (in our case from 2 to 1):# cd /mnt/etc/cluster# vi nodeid8) Change the file /mnt/etc/path_to_inst in order to reflect the correct nodeid for node 0a:# cd /mnt/etc# vi path_to_instChange entries from node@2 to node@1 with the vi command ":%s/node@2/node@1/g"9) Write the bootblock to the disk... just in case:# /usr/sbin/installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/c1t2d0s0Now the disk is ready to be inserted in node 0a in order to bootup the node.10) Bootup node 0a with command "boot -sx"... 
this is becasue we need to make some changes in ccr files in order to recreate did environment.11) Modify cluster ccr:# cd /etc/cluster/ccr# rm did_instances# rm did_instances.bak# vi directory - remove the did_instances line.# /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/directory # grep ccr_gennum /etc/cluster/ccr/directory ccr_gennum -1 # /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/infrastructure # grep ccr_gennum /etc/cluster/ccr/infrastructure ccr_gennum -112) Bring the node 0a down again to the ok prompt and then issue the command "boot -r"Now the node will join the cluster and from scstat and metaset command you can verify functionality. Next step is to encapsulate the boot disk in SDS/SVM and create the mirrors.In our case node 0b has metadevice name starting from d200. For this reason on node 0a we need to create metadevice starting from d100. This is just an example, you can have different names.The important thing to remember is that metadevice boot disks have different names on each node.13) Remove metadevice pointing to the boot and mirror disks (inherit from node 0b):# metaclear -r -f d200# metaclear -r -f d201# metaclear -r -f d204# metaclear -r -f d205# metaclear -r -f d206verify from metastat that no metadevices are set for boot and mirror disks.14) Encapsulate the boot disk:# metainit -f d110 1 1 c1t0d0s0# metainit d100 -m d110# metaroot d10015) Reboot node 0a.16) Create all the metadevice for slices remaining on boot disk# metainit -f d111 1 1 c1t0d0s1# metainit d101 -m d111# metainit -f d114 1 1 c1t0d0s4# metainit d104 -m d114# metainit -f d115 1 1 c1t0d0s5# metainit d105 -m d115# metainit -f d116 1 1 c1t0d0s6# metainit d106 -m d11617) Edit the vfstab in order to specifiy metadevices created:old:/dev/dsk/c1t0d0s1        -       -       swap    -       no      -/dev/md/dsk/d100        /dev/md/rdsk/d100       /       ufs     1       no      -/dev/dsk/c1t0d0s5       /dev/rdsk/c1t0d0s5       /usr    ufs     1       no      logging/dev/dsk/c1t0d0s4       /dev/rdsk/c1t0d0s4       /var    ufs     1       no      logging#/dev/md/dsk/d206       /dev/md/rdsk/d206       /globaldevices  ufs     2       yes     loggingswap    -       /tmp    tmpfs   -       yes     -/dev/dsk/c1t0d0s6       /dev/rdsk/c1t0d0s6       /global/.devices/node@1 ufs      2       no  globalnew:/dev/md/dsk/d101        -       -       swap    -       no      -/dev/md/dsk/d100        /dev/md/rdsk/d100       /       ufs     1       no      -/dev/md/dsk/d105        /dev/md/rdsk/d105       /usr    ufs     1       no      logging/dev/md/dsk/d104        /dev/md/rdsk/d104       /var    ufs     1       no      logging#/dev/md/dsk/106       /dev/md/rdsk/d106       /globaldevices  ufs     2       yes     loggingswap    -       /tmp    tmpfs   -       yes     -/dev/md/dsk/d106        /dev/md/rdsk/d106       /global/.devices/node@1 ufs     2       noglobal18) Reboot node 0a in order to check new SDS/SVM boot configuration.19) Label the mirror disk c1t1d0 with the VTOC of boot disk c1t0d0:# prtvtoc /dev/dsk/c1t0d0s2 > /var/tmp/VTOC_c1t0d0 # fmthard -s /var/tmp/VTOC_c1t0d0 /dev/rdsk/c1t1d0s220) Put DB replica on slice 7 of disk c1t1d0:# metadb -a -c 3 /dev/dsk/c1t1d0s721) Create metadevice for mirror disk c1t1d0 and attach the new mirror side:# metainit d120 1 1 c1t1d0s0# metattach d100 d120# metainit d121 1 1 c1t1d0s1# metattach d101 d121# metainit d124 1 1 c1t1d0s4# metattach d104 d124# metainit d125 1 1 c1t1d0s5# metattach d105 d125# metainit d126 1 1 c1t1d0s6# metattach d106 d126
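
    As a hedged final check, not part of the original procedure: before handing node 0a back to production it is worth confirming the SVM and cluster state with the standard commands (output will of course depend on the actual configuration):

      metadb            # state database replicas on both boot disks
      metastat d100     # root mirror and submirrors should be Okay or resyncing
      metastat d106     # the /global/.devices metadevice
      scstat -n         # node 0a should show as Online in the cluster
      scstat -D         # device group status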

    Read the article

  • GPL vs plugin interfaces not designed with a specific application in mind

    - by Kristóf Marussy
    I am not seeking or in need of legal advice, but an interesting thought experiment came to my mind. Imagine the following situation (I cannot really think of a concrete example, and I am unsure whether a real manifestation even exists): there is a free (libre) API, A, licensed under some permissive license or even the LGPL. Non-free application B implements this API in order to host plugins, but there is other free software doing the same thing. Moreover, there is plugin C acting as a plugin under API A. It links to library D, which is under the GPL, so C is also under the GPL. Plugins using A are loaded into hosts via a dlopen-like mechanism and use complex data structures for host-plugin communication. Neither B nor C distributes any files that may be required for A to function properly (like headers containing the structure definitions of A, or dynamic libraries containing helper functions for A written by the authors of A), but such things may exist.

    Now some user installs application B and plugin C on his machine, along with anything that may be required for API A to function properly. Then he proceeds to load C into B and creates some intellectual property with B which is not a piece of software. Did a GPL violation happen at some point, and if so, who violated the GPL and why?
    - The authors of C, by making it possible for C to be used in the non-free host B? This is a possibility because they can't grant an exception to the GPL (like the one described in http://www.gnu.org/licenses/gpl-faq.html#GPLPluginsInNF or http://www.gnu.org/licenses/gpl-faq.html#LinkingOverControlledInterface) due to D's license terms.
    - The authors of B, by making it possible for C to be loaded into B, thereby violating C's and D's licenses? This is a possibility because http://www.gnu.org/licenses/gpl-faq.html#NFUseGPLPlugins disallows the mechanisms A uses for communication between the free and non-free modules.
    - The authors of A, because the API may be used (and in this case, was used) for communication between GPL'd and non-free software? This would be extremely absurd.
    - The user, because at the moment of loading C into B he made a derived work of C? I think this is impossible, because he does not distribute it. But would the situation change if he decided to release a configuration file for B which makes B load C as a plugin?
    - Nobody, because A counts as a 'system library' and both B and C directly interact only with A, not with each other? In a sane world, this is what would happen...

    A concrete example of A could be some kind of audio (think LADSPA) or image processing API. However, I could find no such interface that is free software, generic, and also implemented by commercial tools. A real-world example could also be quite enlightening.

    Read the article

  • Wordpress Installation (on IIS and SQL Server)

    - by Davide Mauri
    To proceed with the installation of Wordpress on SQL Server and IIS, first of all, you need to do the following steps Create a database on SQL Server that will be used by Wordpress Create login that can access to the just created database and put the user into ddladmin, db_datareader, db_datawriter roles Download and unpack Wordpress 3.3.2 (latest version as of 27 May 2012) zip file into a directory of your choice Download the wp-db-abstraction 1.1.4 (latest version as of 27 May 2012) plugin from wordpress.org website Now that the basic action has been done, you can start to setup and configure your Wordpress installation. Unpack and follow the instructions in the README.TXT file to install the Database Abstraction Layer. Mainly you have to: Upload wp-db-abstraction.php and the wp-db-abstraction directory to wp-content/mu-plugins.  This should be parallel to your regular plugins directory.  If the mu-plugins directory does not exist, you must create it. Put the db.php file from inside the wp-db-abstraction.php directory to wp-content/db.php Now you can create an application pool in IIS like the following one Create a website, using the above Application Pool, that points to the folder where you unpacked Wordpress files. Be sure to give the “Write” permission to the IIS account, as pointed out in this (old, but still quite valid) installation manual: http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#iis Now you’re ready to go. Point your browser to the configured website and the Wordpress installation screen will be there for you. When you’re requested to enter information to connect to MySQL database, simply skip that page, leaving the default values. If you have installed the Database Abstraction Layer, another database installation screen will appear after the one used by MySQL, and here you can enter the configuration information needed to connect to SQL Server. After having finished the installation steps, you should be able to access and navigate your wordpress site.  A final touch, and it’s done: just add the needed rewrite rules http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#urlrewrite and that’s it! Well. Not really. Unfortunately the current (as of 27 May 2012) version of the Database Abstraction Layer (1.1.4) has some bugs. Luckily they can be quickly fixed: Backslash Fix http://wordpress.org/support/topic/plugin-wp-db-abstraction-fix-problems-with-backslash-usage Select Top 0 Fix Make the change to the file “.\wp-content\mu-plugins\wp-db-abstraction\translations\sqlsrv\translations.php” suggested by “debettap”   http://sourceforge.net/tracker/?func=detail&aid=3485384&group_id=315685&atid=1328061 And now you have a 100% working Wordpress installation on SQL Server! Since I also wanted to take advantage of SQL Server Full Text Search, I’ve created a very simple wordpress plugin to setup full-text search and to use it as website search engine: http://wpfts.codeplex.com/ Enjoy!
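
    For the first two preparation steps (database and login), a minimal sqlcmd sketch is shown below. The names and password are placeholders, and the fixed database role is spelled db_ddladmin, which is presumably what "ddladmin" above refers to:

      sqlcmd -S localhost -E -Q "CREATE DATABASE wordpress;"
      sqlcmd -S localhost -E -Q "CREATE LOGIN wp_user WITH PASSWORD = 'Str0ng!Passw0rd';"
      sqlcmd -S localhost -E -d wordpress -Q "CREATE USER wp_user FOR LOGIN wp_user; EXEC sp_addrolemember 'db_ddladmin', 'wp_user'; EXEC sp_addrolemember 'db_datareader', 'wp_user'; EXEC sp_addrolemember 'db_datawriter', 'wp_user';"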

    Read the article

  • MythTV lost recordings - "No recordings available" and no recording rules either

    - by nimasmi
    I have a c.6 year old mythtv database. I recently upgraded from Ubuntu 10.04 to 12.04. This brought a MythTV upgrade from 0.24 to 0.25, which went well. Today, all my recordings have disappeared. They still exist in the /var/lib/mythtv/recordings folder, and the 'M' key in the Watch Recordings page says that there are 201 recordings available somewhere, but they will not display. See screenshot: (implicit thanks to whomever upvoted this, giving me sufficient reputation to upload images) Changing the filter does not remedy the fact that there is nothing shown in the lists. My Upcoming Recordings screen says that there are no rules set, but my list of previously recorded shows is still there, and has an entry from as recently as 3am today. mythbackend --printsched gives the following: user@box:~$ mythbackend --printsched 2012-09-22 12:59:20.537008 C mythbackend version: fixes/0.25 [v0.25.2-15-g46cab93] www.mythtv.org 2012-09-22 12:59:20.537043 C Qt version: compile: 4.8.1, runtime: 4.8.1 2012-09-22 12:59:20.537048 N Enabled verbose msgs: general 2012-09-22 12:59:20.537076 N Setting Log Level to LOG_INFO 2012-09-22 12:59:20.537142 I Added logging to the console 2012-09-22 12:59:20.537152 I Added database logging to table logging 2012-09-22 12:59:20.537279 N Setting up SIGHUP handler 2012-09-22 12:59:20.537373 N Using runtime prefix = /usr 2012-09-22 12:59:20.537394 N Using configuration directory = /home/user/.mythtv 2012-09-22 12:59:20.537999 I Assumed character encoding: en_GB.UTF-8 2012-09-22 12:59:20.538599 N Empty LocalHostName. 2012-09-22 12:59:20.538610 I Using localhost value of box 2012-09-22 12:59:20.538792 I Testing network connectivity to '192.168.1.2' 2012-09-22 12:59:20.539420 I Starting process manager 2012-09-22 12:59:20.541412 I Starting IO manager (read) 2012-09-22 12:59:20.541715 I Starting IO manager (write) 2012-09-22 12:59:20.541836 I Starting process signal handler 2012-09-22 12:59:20.684497 N Setting QT default locale to EN_GB 2012-09-22 12:59:20.684694 I Current locale EN_GB 2012-09-22 12:59:20.684813 N Reading locale defaults from /usr/share/mythtv//locales/en_gb.xml 2012-09-22 12:59:20.697623 I New static DB connectionDataDirectCon 2012-09-22 12:59:20.704769 I MythCoreContext: Connecting to backend server: 192.168.1.2:6543 (try 1 of 1) Calculating Schedule from database. Inputs, Card IDs, and Conflict info may be invalid if you have multiple tuners. 2012-09-22 12:59:27.710538 E MythSocket(21dfcd0:14): readStringList: Error, timed out after 7000 ms. 2012-09-22 12:59:27.710592 C Protocol version check failure. The response to MYTH_PROTO_VERSION was empty. This happens when the backend is too busy to respond, or has deadlocked in due to bugs or hardware failure. Things I have tried so far: restart the backend restart the frontend run mythtv-setup and check database passwords and IP addresses change the frontend setting for backend IP from localhost to 192.168.1.2 (the backend/frontend's IP) run optimize_mythdb.pl Other suggestions appreciated.
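
    One hedged avenue beyond the steps already tried above: since the frontend fails at the protocol version check (backend busy or deadlocked), restart the backend and confirm that the recordings and rules are still present in the mythconverg database. The service name, log path and credentials below are the Ubuntu defaults and may differ:

      sudo service mythtv-backend restart
      tail -n 50 /var/log/mythtv/mythbackend.log                            # look for deadlock or DB errors on startup
      mysql -u mythtv -p mythconverg -e "SELECT COUNT(*) FROM recorded;"    # recordings metadata
      mysql -u mythtv -p mythconverg -e "SELECT COUNT(*) FROM record;"      # recording rules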

    Read the article

  • Extensible Metadata in Oracle IRM 11g

    - by martin.abrahams
    Another significant change in Oracle IRM 11g is that we now use XML to create the tamperproof header for each sealed document. This article explains what this means, and what benefit it offers. So, every sealed file has a metadata header that contains information about the document - its classification, its format, the user who sealed it, the name and URL of the IRM Server, and much more. The IRM Desktop and other IRM applications use this information to formulate the request for rights, as well as to enhance the user experience by exposing some of the metadata in the user interface. For example, in Windows explorer you can see some metadata exposed as properties of a sealed file and in the mouse-over tooltip. The following image shows 10g and 11g metadata side by side. As you can see, the 11g metadata is written as XML as opposed to the simple delimited text format used in 10g. So why does this matter? The key benefit of using XML is that it creates the opportunity for sealing applications to use custom metadata. This in turn creates the opportunity for custom classification models to be defined and enforced. Out of the box, the solution uses the context classification model, in which two particular pieces of metadata form the basis of rights evaluation - the context name and the document's item code. But a custom sealing application could use some other model entirely, enabling rights decisions to be evaluated on some other basis. The integration with Oracle Beehive is a great example of this. When a user adds a document to a Beehive workspace, that document can be automatically sealed with metadata that represents the Beehive security model rather than the context model. As a consequence, IRM can enforce the Beehive security model precisely and all rights configuration can actually be managed through the Beehive UI rather than the IRM UI. In this scenario, IRM simply supports the Beehive application, seamlessly extending Beehive security to all copies of workspace documents without any additional administration. Finally, I mentioned that the metadata header is tamperproof. This is obviously to stop a rogue user modifying the metadata with a view to gaining unauthorised access - reclassifying a board document to a less sensitive classifcation, for example. To prevent this, the header is digitally signed and can only be manipulated by a suitably authorised sealing application.

    Read the article

  • ZTE USB Modem AC2736, connection not possible in Ubuntu 12.04.1 LTS

    - by Fredo
    It's a long post, but it covers nearly all my experiments and the changes I made to NetworkManager. I hope the information is complete; if there are still questions, I can provide more.

    I have a ZTE AC2736 USB modem (CDMA) which worked fine in Ubuntu 10.04 and 11.10. I recently switched to 12.04.1 (Precise Pangolin). After the switch, the first issue I faced was connecting to the internet using my USB modem (ISP: Reliance, brand: Netconnect). I tried to run the drivers provided by Reliance, but they are old and don't support kernels above 2.6.30, and since the source code was not included with the 12.04 ISO image I couldn't compile the files provided with that driver. lsusb does detect it as a modem, with output similar to 19d2:fff1 ZTE CDMA Technologies Inc. (or similar; I didn't note it down). If it is detected as USB storage it shows 19d2:fff5 (according to a few online forums; I may be wrong here).

    I used NetworkManager and configured the modem to dial #777 (the default) with the ISP-provided username:password combination. It tries to connect to the internet (3-4 times automatically) but fails to get online. Once I was able to connect in the morning hours and the message 'registered on CDMA home network' was flashed. I was able to run an update; the kernel was updated to 3.0.2-pae or something similar (I can find out if required). I surfed the net for about 2 hours before restarting. After the restart, the modem was again not able to get me online. I kept trying many times and tried changing the settings in NetworkManager. One evening after dark I was able to connect to the network with the same 'registered on CDMA home network' message (the message was similar; I'm not precise here, sorry). I was able to surf the net for nearly 3 hours before I switched off my laptop. I have not been able to get online since that day; it's been 3 days now. I'll try the observed theory of late/early hours again soon, as mentioned below.

    Laptop configuration: Lenovo B480, Intel B950 processor, 4 GB DDR3 RAM, 500 GB HDD, Broadcom wireless/Bluetooth/Ethernet LAN, OS: FreeDOS / Ubuntu 12.04 LTS (dual boot), kernel 3.0.2-pae.

    Observation: I'm able to connect to the internet in those hours when the speed is generally high (low usage by other wireless network users), like early mornings or late nights. This is strange, as the connection should not depend on bandwidth usage. Any help to fix this issue would be appreciated, before I decide to change the OS or ISP.
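
    A hedged way to rule out a NetworkManager-specific problem is to dial the modem directly with wvdial, using the same #777 number and username/password as in the NM profile. The device node, serial speed and account details in this /etc/wvdial.conf sketch are assumptions:

      [Dialer Defaults]
      Modem = /dev/ttyUSB0
      Baud = 460800
      Init1 = ATZ
      Phone = #777
      Username = <reliance-number>
      Password = <password>
      Stupid Mode = 1

    With that file in place, running sudo wvdial will attempt the dial-up and print the PPP negotiation, which also helps show whether the failure is on the modem side or in NetworkManager.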

    Read the article

< Previous Page | 441 442 443 444 445 446 447 448 449 450 451 452  | Next Page >