Search Results

Search found 13653 results on 547 pages for 'old school'.


  • Wordpress Installation (on IIS and SQL Server)

    - by Davide Mauri
    To proceed with the installation of WordPress on SQL Server and IIS, you first need to do the following:

        1. Create a database on SQL Server that will be used by WordPress.
        2. Create a login that can access the just-created database, and put its user into the ddladmin, db_datareader, and db_datawriter roles (a T-SQL sketch of these first two steps appears at the end of this post).
        3. Download and unpack the WordPress 3.3.2 (latest version as of 27 May 2012) zip file into a directory of your choice.
        4. Download the wp-db-abstraction 1.1.4 (latest version as of 27 May 2012) plugin from the wordpress.org website.

    Now that the basic actions have been done, you can start to set up and configure your WordPress installation. Unpack the plugin and follow the instructions in the README.TXT file to install the Database Abstraction Layer. Mainly you have to:

        1. Upload wp-db-abstraction.php and the wp-db-abstraction directory to wp-content/mu-plugins. This should be parallel to your regular plugins directory. If the mu-plugins directory does not exist, you must create it.
        2. Put the db.php file from inside the wp-db-abstraction.php directory into wp-content/db.php.

    Now you can create an application pool in IIS for the site, then create a website, using that application pool, that points to the folder where you unpacked the WordPress files. Be sure to give the "Write" permission to the IIS account, as pointed out in this (old, but still quite valid) installation manual: http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#iis

    Now you're ready to go. Point your browser at the configured website and the WordPress installation screen will be there for you. When you're asked for the information to connect to the MySQL database, simply skip that page, leaving the default values. If you have installed the Database Abstraction Layer, another database installation screen will appear after the MySQL one, and there you can enter the configuration information needed to connect to SQL Server. After finishing the installation steps, you should be able to access and navigate your WordPress site.

    A final touch and it's done: just add the needed rewrite rules (http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#urlrewrite) and that's it!

    Well, not really. Unfortunately the current (as of 27 May 2012) version of the Database Abstraction Layer (1.1.4) has some bugs. Luckily they can be quickly fixed:

        1. Backslash fix: http://wordpress.org/support/topic/plugin-wp-db-abstraction-fix-problems-with-backslash-usage
        2. "Select Top 0" fix: make the change to the file ".\wp-content\mu-plugins\wp-db-abstraction\translations\sqlsrv\translations.php" suggested by "debettap": http://sourceforge.net/tracker/?func=detail&aid=3485384&group_id=315685&atid=1328061

    And now you have a 100% working WordPress installation on SQL Server! Since I also wanted to take advantage of SQL Server Full-Text Search, I've created a very simple WordPress plugin to set up full-text search and use it as the website search engine: http://wpfts.codeplex.com/ Enjoy!
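    Here is a minimal T-SQL sketch of the first two steps above. The database, login, user, and password names are placeholders of mine, not from the original post; note that the fixed database role is actually named db_ddladmin.

        -- Create the database WordPress will use
        CREATE DATABASE WordPressDB;
        GO
        USE WordPressDB;
        GO
        -- Create a SQL Server login and map it to a user in the new database
        CREATE LOGIN wp_login WITH PASSWORD = 'Str0ng!Passw0rd';  -- placeholder password
        CREATE USER wp_user FOR LOGIN wp_login;
        GO
        -- Add the user to the three roles mentioned in step 2
        -- (on SQL Server 2012 and later you can use ALTER ROLE ... ADD MEMBER instead)
        EXEC sp_addrolemember 'db_ddladmin',   'wp_user';
        EXEC sp_addrolemember 'db_datareader', 'wp_user';
        EXEC sp_addrolemember 'db_datawriter', 'wp_user';
        GO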

    Read the article

  • links for 2011-03-17

    - by Bob Rhubart
    - Siba Prasad: Oracle Database on Amazon RDS
      Siba Prasad shares an analysis of the pros and cons. (tags: oracle database cloud amazon)
    - LIVE WEBCAST March 24, 2pm PT: Why Switch from Red Hat and SUSE Linux to Oracle Linux? (Oracle's Linux Blog)
      Featuring Oracle's Monica Kumar, Sr. Director of Linux, Oracle VM and MySQL, and Avi Miller, Principal Sales Consultant, Linux and Virtualization. (tags: oracle linux)
    - Webcast: IBM SOA vs. Oracle SOA, March 24, 1pm ET / 10am PT
      Maneesh Joshi and Bruce Tierney guide you to a solid understanding of the differences between the Oracle and IBM approach to comprehensive SOA. (tags: oracle soa bpm)
    - Finding the Right Solution to Source and Manage Your Contractors (PeopleSoft Apps Strategy)
      "Talent has become a primary competitive advantage for most organizations. Contingent labor offers talent on flexible terms; it offers the ability to scale up operations, close skill gaps, and manage risk in the process of delivering services." - Mark Rosenberg (tags: oracle peoplesoft enterprisearchitecture)
    - Oracle Business Intelligence Customers: Have Your Voice Heard in the "2011 Wisdom of the Crowds Business Intelligence Market Survey" (BI & Analytics Pulse)
      "The Wisdom of the Crowds survey combines social media, crowd sourcing, and good old fashioned market research to provide vendors and customers alike an unvarnished and insightful snap shot of what's top of mind with business intelligence professionals." (tags: oracle businessintelligence)
    - Martin Bach: Troubleshooting Grid Infrastructure startup
      Martin Bach hunts down the problem that caused one of his blades to reboot after an EXT3 journal error. (tags: oracle grid rac)
    - Oracle WebCenter: Social Networking & Collaboration (Oracle Enterprise 2.0 Blog)
      Kelley Ruppel with information on "how the new release of Oracle WebCenter provides unprecedented Social Networking and Collaboration." (tags: oracle webcenter enterprise2.0 collaboration)
    - VirtaThon: 100% Virtual Java/Oracle/MySQL Conference! | Bex Huff
      "The goal is simple," says Oracle ACE Director Bex Huff. "Because it's all online, the conference is very cheap. Pricing is not yet announced... but it should be around $300. Also, unlike other conferences, every speaker gets paid a small fee depending on the popularity of his or her session." (tags: oracle oracleace java mysql)
    - Griffiths Waite Blog: BPM 11g PS3
      GW's Ian Heathcock shares a link to "a most interesting article on Oracle's recent release discussing the new features and how PS3 adds value to the whole SOA message." (tags: oracle soa)
    - The Buttso Blathers: Tutorial: JSF 2.0 and JPA 2.0 with WebLogic Server using NetBeans
      Should you take application architecture advice from a man named Buttso? In this case, yes. (tags: oracle jsf jpa weblogic)
    - Setting-up a High Available Tuned SOA Environment (Middleware Magic) (tags: ping.fm)
    - How to Configure Weblogic Messaging Bridge with JBoss (Middleware Magic) (tags: ping.fm Weblogic JBoss)
    - Richard Veryard on Architecture: Emergent Architecture (tags: ping.fm entarch emergence)

    Read the article

  • Can't get Wireless RT2x00usb driver to work, and can't blacklist it

    - by TheLQ
    After a two-year hiatus from Linux, I'm trying it out again. And then I run into driver issues... I have an old Linksys WUSB54G v4 Wireless USB Adapter. In previous versions I had to use a combination of ndiswrapper and Wicd to have any hope of getting it working. In 10.10 there are apparently built-in drivers for it. Unfortunately they don't work: it fails to connect to my WPA network and fails to connect to my open unencrypted network. Wicd fails at "Obtaining IP address", or, when using static IPs, fails at verifying connectivity to the network. Getting fed up, I tried the ndiswrapper approach. Installed and configured, but still not working, even when blacklisting the rt2570 module. So for some debugging I added some lines to my /etc/modprobe.d/blacklist.conf file:

        blacklist rt2570
        blacklist prism54usb
        blacklist rt2x00lib
        blacklist rt2x00usb

    Restart and find this:

        lordquackstar@quackbeast:/etc/modprobe.d$ lsmod | grep rt2
        rt2500usb              18049  0
        rt2x00usb               9779  1 rt2500usb
        rt2x00lib              27275  2 rt2500usb,rt2x00usb
        led_class               2633  1 rt2x00lib
        mac80211              231541  2 rt2x00usb,rt2x00lib
        cfg80211              144470  2 rt2x00lib,mac80211

    Seems to be ignored... Tried this:

        lordquackstar@quackbeast:/etc/modprobe.d$ sudo rmmod -f rt2x00usb
        ERROR: Removing 'rt2x00usb': Resource temporarily unavailable
        lordquackstar@quackbeast:/etc/modprobe.d$ sudo rmmod -f rt2x00lib
        ERROR: Removing 'rt2x00lib': Resource temporarily unavailable

    ...and couldn't connect. Restarted and was back to the same modules loading. Maybe there's something in the log:

        lordquackstar@quackbeast:/etc/modprobe.d$ tail -n100000 /var/log/syslog | grep rt2
        Dec 13 19:01:15 quackbeast kernel: [   23.698056] Registered led device: rt2500usb-phy0::radio
        Dec 13 19:01:15 quackbeast kernel: [   23.698140] Registered led device: rt2500usb-phy0::quality
        Dec 13 19:01:15 quackbeast kernel: [   23.701680] usbcore: registered new interface driver rt2500usb
        Dec 13 19:01:15 quackbeast NetworkManager[855]: <info> (wlan0): new 802.11 WiFi device (driver: 'rt2500usb' ifindex: 4)
        Dec 13 19:17:47 quackbeast kernel: [   23.521759] Registered led device: rt2500usb-phy0::radio
        Dec 13 19:17:47 quackbeast kernel: [   23.521824] Registered led device: rt2500usb-phy0::quality
        Dec 13 19:17:47 quackbeast kernel: [   23.524740] usbcore: registered new interface driver rt2500usb
        Dec 13 19:17:47 quackbeast NetworkManager[798]: <info> (wlan0): new 802.11 WiFi device (driver: 'rt2500usb' ifindex: 4)

    Seems to be autoloading. So this means that even if I pull the adapter out, remove the module, and get it working, it still won't work when it's plugged in all the time. More info:

        lordquackstar@quackbeast:/etc/modprobe.d$ sudo lshw -C Network
        *SNIP*
          *-network
               description: Wireless interface
               physical id: 1
               bus info: usb@1:2
               logical name: wlan0
               serial: 00:12:17:9b:f3:1e
               capabilities: ethernet physical wireless
               configuration: broadcast=yes driver=rt2500usb driverversion=2.6.35-24-generic firmware=N/A link=no multicast=yes wireless=IEEE 802.11bg

    USB:

        lordquackstar@quackbeast:/etc/modprobe.d$ lsusb | grep -i rt
        Bus 001 Device 003: ID 13b1:000d Linksys WUSB54G v4 802.11g Adapter [Ralink RT2500USB]

    Any suggestions on how to either fix the rt2x00usb driver or permanently block it from loading? Note that I already have ndiswrapper installed.
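    A note on the blacklist attempt above: modprobe's blacklist directive only stops a module from being loaded by its alias; modules pulled in as dependencies (here rt2x00usb and rt2x00lib are dependencies of rt2500usb) are loaded anyway, which matches the lsmod output shown. A commonly suggested stronger variant, sketched below against the same /etc/modprobe.d/blacklist.conf file (untested on this exact hardware, so treat it as an assumption), is to override the modules' install command so modprobe runs /bin/false instead of inserting them, leaving the device free for ndiswrapper:

        # /etc/modprobe.d/blacklist.conf
        # "blacklist" alone does not prevent dependency loading, so force
        # the whole rt2x00 stack to fail to load:
        install rt2500usb /bin/false
        install rt2x00usb /bin/false
        install rt2x00lib /bin/false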

    Read the article

  • Radeon HD 2000, 3000, 4000 on 12.10 Quantal: fglrx (legacy) 12.6 unsupported, what to do?

    - by Andrew Mao
    After upgrading to 12.10 Quantal, the packaged version of fglrx no longer works. I discovered that this is because there is a separate 'legacy' fglrx driver for the HD 2000-4000 series cards, but it is incompatible with the X.Org server on 12.10. The most current version of the legacy driver for HD 2000 through HD 4000 series cards is here: http://support.amd.com/us/kbarticles/Pages/catalyst126legacyproducts.aspx. You can't use the non-legacy fglrx driver, but you can use the open-source radeon driver, if you prefer your WM compositing to be laggy and your YouTube videos to play like they would on a Pentium MMX.

    Usually this driver can be installed in the following way, necessary because apt-get install fglrx would pull in the non-legacy driver:

        wget http://www2.ati.com/drivers/legacy/amd-driver-installer-12.6-legacy-x86.x86_64.zip
        unzip amd-driver-installer-*
        sudo sh ./amd-driver-installer-*.run --buildpkg Ubuntu/quantal
        sudo dpkg -i fglrx*.deb
        sudo aticonfig --initial -f

    If you use a different version of fglrx (for example, a newer 12.9 that doesn't support those cards), then the final command will give you an error, "no supported hardware detected" or something similar. However, everything works at this point and you will get a reasonable xorg.conf:

        ... other stuff
        Section "Device"
            Identifier "aticonfig-Device[0]-0"
            Driver     "fglrx"
            BusID      "PCI:1:5:0"
        EndSection
        ... other stuff

    At this point you're supposed to reboot and everything will be working with the fglrx driver. However, upon rebooting, you'll be treated to the following error in Xorg.0.log when fglrx attempts to load:

        (EE) Failed to load /usr/lib/xorg/modules/drivers/fglrx_drv.so: /usr/lib/xorg/modules/drivers/fglrx_drv.so: undefined symbol: noXFree86DRIExtension

    Some searching around will show that this is a problem with the legacy ATI drivers not supporting xserver 1.13 or newer (Arch Linux thread). ATI has released a fixed driver for its most recent (HD 5000 series or later) cards, but not for the 'legacy' cards yet. The non-legacy ATI drivers can't be used with the old cards. What should an Ubuntu user, using one of these HD 2000-4000 series cards, do?

        - Wait for an updated 'legacy' ATI driver that properly works with xserver 1.13?
        - Downgrade back to 12.04 Precise, which uses xserver 1.11?
        - Try to downgrade xserver on 12.10 Quantal to 1.12, which could possibly break Unity and GNOME?
        - A forced upgrade to an HD 5000 series or later card? (Not possible with integrated graphics...)
        - Some other 1337 action that fixes this problem painlessly?

    Read the article

  • The Joy Of Hex

    - by Jim Giercyk
    While working on a mainframe integration project, it occurred to me that some basic computer concepts are slipping into obscurity. For example, just about anyone can tell you that a 64-bit processor is faster than a 32-bit processor. A grade-school child could tell you that a computer "speaks" in '1's and '0's. Some people can even tell you that there are 8 bits in a byte. However, I have found that even the most seasoned developers often can't explain the theory behind those statements. That is not a knock on programmers; in the age of IntelliSense, what reason do we have to work with data at the bit level? Many computer theory classes treat bit-level programming as a thing of the past, no longer necessary now that storage space is plentiful. The trouble with that mindset is that the world is full of legacy systems that run programs written in the 1970s. Today our jobs require us to extract data from those systems, regardless of the format, and that often involves low-level programming. Because knowledge of the low-level concepts seems to be waning in recent times, I thought a review would be in order.

        CHARACTER: See Spot Run
        HEX:       53 65 65 20 53 70 6F 74 20 52 75 6E
        DECIMAL:   83 101 101 32 83 112 111 116 32 82 117 110
        BINARY:    01010011 01100101 01100101 00100000 01010011 01110000 01101111 01110100 00100000 01010010 01110101 01101110

    In this example, I have broken down the words "See Spot Run" to a level computers can understand: machine language.

    CHARACTER: The character level is what is rendered by the computer. A "character set" or "code page" contains 256 characters, both printable and unprintable. Each character represents 1 BYTE of data. For example, the character string "See Spot Run" is 12 bytes long, exclusive of the quotation marks. Remember, a SPACE is an unprintable character, but it still requires a byte. In the example I have used the default Windows character set, ASCII, which you can see here: http://www.asciitable.com/

    HEX: Hex is short for hexadecimal, or base 16. Humans are comfortable thinking in base ten, perhaps because they have 10 fingers and 10 toes; fingers and toes are called digits, so it's not much of a stretch. Computers think in base 16, with numeric values ranging from zero to fifteen, or 0 - F. Each decimal place has a possible 16 values as opposed to a possible 10 values in base 10. Therefore, the number 10 in hex is equal to the number 16 in decimal.

    DECIMAL: The decimal conversion is strictly for us humans to use for calculations and conversions. It is much easier for us humans to calculate that [30 - 10 = 20] in decimal than it is for us to calculate [1E - A = 14] in hex. In the old days, an error in a program could be found by determining the displacement from the entry point of a module. Since those values were dumped from the computer's head, they were in hex. A programmer needed to convert them to decimal, do the equation, and convert back to hex. This gets into relative and absolute addressing, a topic for another day.

    BINARY: Binary, or machine code, is where any value can be expressed in 1s and 0s. It is really base 2, because each decimal place can have a possibility of only 2 characters, a 1 or a 0. In binary, the number 10 is equal to the number 2 in decimal. Why only 1s and 0s? Very simply, computers are made up of lots and lots of transistors which at any given moment can be ON (1) or OFF (0). Each transistor is a bit, and the order in which the transistors fire (or don't fire) is what distinguishes one value from another in the computer's head (or CPU). Consider 32-bit vs. 64-bit processing: a 64-bit processor has the capability to read 64 transistors at a time. A 32-bit processor can only read half as many at a time, so in theory the 64-bit processor should be much faster. There are many more factors involved in CPU performance, but that is the fundamental difference.

        DECIMAL   HEX   BINARY
        0         0     0000
        1         1     0001
        2         2     0010
        3         3     0011
        4         4     0100
        5         5     0101
        6         6     0110
        7         7     0111
        8         8     1000
        9         9     1001
        10        A     1010
        11        B     1011
        12        C     1100
        13        D     1101
        14        E     1110
        15        F     1111

    Remember that each character is a BYTE, there are 2 HEX characters in a byte (called nibbles), and 8 BITS in a byte. I hope you enjoyed reading about the theory of data processing. This is just a high-level explanation, and there is much more to be learned. It is safe to say that, no matter how advanced our programming languages and visual studios become, they are nothing more than a way to interpret bits and bytes. There is nothing like the joy of hex to get the mind racing.
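    As a quick worked version of the hex arithmetic mentioned above (the notation here is mine, not from the original post), converting through decimal shows why [1E - A = 14] holds:

        $1\mathrm{E}_{16} = 1 \times 16 + 14 = 30_{10}$ and $\mathrm{A}_{16} = 10_{10}$, so
        $30_{10} - 10_{10} = 20_{10} = 1 \times 16 + 4 = 14_{16}$.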

    Read the article

  • How To: Spell Check InfoPath web form in SharePoint 2010

    - by Jeremy Ramos
    Originally posted on: http://geekswithblogs.net/JeremyRamos/archive/2013/11/07/how-to-spell-check-infopath-web-form-in-sharepoint-2010.aspx

    This is a sequel to my 2011 post about How To: Spell Check InfoPath Web Form in SharePoint. This time I will share how I managed to achieve spell checking in SharePoint 2010. This time round, we have changed our online forms strategy to use custom lists instead of form libraries. I thought everything would be smooth sailing as we are using all OOTB features. So we customised a custom list form using InfoPath and added a few rich text boxes (spell check is a requirement for this specific project). All is good in the InfoPath client, including the spell checker, so, happy days, I published straight away.

    Here come the surprises. I browsed to my custom list and clicked Add New Item. This launched my form in a modal dialog. I went to my rich text boxes to check the spell checker and, voila, it's disabled!

    I tried hacking the FormServer.aspx and the CustomSpellCheckEntirePage.js again, but the new FormServer.aspx behaves differently from MOSS 2007's. I searched for answers in many blogs to no avail, often ending up being linked to my old blog post. I also tried placing the spell check JavaScript into a Content Editor Web Part on the item's New form and Edit form. It launches the Spell Check dialog, but it doesn't spell-check the page correctly. At this point, I decided I needed to get my project across ASAP, so enough with experimentations; I logged a ticket with Microsoft Premier Support.

    On a call with the support engineer, I browsed through the custom list and to the item to demonstrate my problem. Suddenly, the Spell Check tab in the toolbar was enabled! Surprised? Not much, it's Microsoft!

    Anyway, to cut my story short, here is a summary of my solution:

        1. Navigate to your custom list.
        2. In the ribbon toolbar, navigate to List > Customize List > Form Web Parts > Content Type Forms > (Item) New Form. This displays newifs.aspx, which is the page shown when Add New Item is clicked. This page, just like any other SharePoint page, contains web parts; in this case, we have the InfoPath Form Web Part.
        3. Add a Content Editor Web Part (CEWP) on top of the InfoPath Form Web Part. (A blank CEWP will do for this example.)
        4. Navigate to Page and click Stop Editing.
        5. Click Add New Item again and navigate to a rich text box. Tadah! The Spell Check tab is now enabled!

    Do the same steps for the (Item) Edit Form to enable spell checks when editing an item. This "no code" solution was discovered purely by accident!

    Read the article

  • Database Mirroring – deprecated

    - by fatherjack
    Do you use mirroring on any of your databases? Do you use mirroring on SQL Server Standard Edition? I do, as a way of having a stand-by server ready to take over if there is a problem with the live server, so that business can continue despite whatever disaster may strike at our primary server location. In my experience it has been a great solution for us, as it is simple to implement, reliable, and predictable.

    Mirroring has been around since SQL Server 2005 SP1, but with the release of SQL Server 2012 mirroring has now been placed on the deprecation list. That's right: Microsoft are removing this feature from SQL Server. SQL Server 2012 had lots of improvements and new features around this sort of technology: the high availability, disaster recovery, and AlwaysOn features described in detail here by Brent Ozar and Microsoft's own Customer Service and Support SQL Server Engineers. Now the bad news: the HADRON features are pretty much all wrapped up in the Enterprise Edition of SQL Server 2012. This is going to be a big issue for people, like me, who are only on Standard Edition of earlier versions, mostly due to our requirements and the budget (or lack thereof) required for Enterprise Edition licenses. No mirroring in Standard Edition means no upgrade.

    Don't panic. There are two stages of deprecation, and they don't happen fast. The first stage, Deprecation Announcement, means that Microsoft have decided that there is a limited future for a particular feature, and this is your cue that new projects and developments should not be implemented on this technology, as it will cease to exist in the future. This is where mirroring currently stands. You have time to consider your options and start work on planning how you will move away from using this feature. This can be 2 or 3 versions of SQL Server, possibly more. The next stage is Deprecation Final Support: this is where you are on your last chance. When you see this, the next version of SQL Server will not have this feature in it, so you need to implement your plans to move to an alternative solution.

    While these two phases are taking place, Microsoft are open to feedback on how people use their products, and if enough people make the case for mirroring (or an equivalent technology) to be in the Standard Edition, then they may make changes rather than lose customers or have customers cease upgrading in order to keep the functionality they need. Denny Cherry (@MrDenny) has published an article on this same topic here with more detail than me, so I won't go over old ground. All I will say is that you should read his article now and then follow the link to his own site, where he is collecting people's information on how they use mirroring in Standard Edition so that our voice can be put to Microsoft.
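    Before planning a move away from mirroring, it helps to take stock of where it is actually in use. A minimal T-SQL sketch (mine, not from the original article) that lists the mirrored databases on an instance, with their role and state:

        SELECT DB_NAME(database_id)      AS database_name,
               mirroring_role_desc,
               mirroring_state_desc,
               mirroring_partner_instance
        FROM   sys.database_mirroring
        WHERE  mirroring_guid IS NOT NULL;  -- NULL means the database is not mirrored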

    Read the article

  • Your interesting code tricks/ conventions? [closed]

    - by Paul
    What interesting conventions, rules, tricks do you use in your code? Preferably some that are not so popular, so that the rest of us would find them as novelties. :) Here are some of mine...

    Input and output parameters
    This applies to C++ and other languages that have both references and pointers. The convention: input parameters are always passed by value or const reference; output parameters are always passed by pointer. This way I'm able to see at a glance, directly from the function call, which parameters might get modified by the function. Inspiration: old C code.

        int a = 6, b = 7, sum = 0;
        calculateSum(a, b, &sum);

    Ordering of headers
    My typical source file begins like this (see code below). The reason I put the matching header first is that, in case that header is not self-sufficient (I forgot to include some necessary library, or forgot to forward declare some type or function), a compiler error will occur.

        // Matching header
        #include "example.h"

        // Standard libraries
        #include <string>
        ...

    Setter functions
    Sometimes I find that I need to set multiple properties of an object all at once (like when I've just constructed it and I need to initialize it). To reduce the amount of typing and, in some cases, improve readability, I decided to make my setters chainable. Inspiration: the Builder pattern.

        class Employee {
        public:
            Employee& name(const std::string& name);
            Employee& salary(double salary);
        private:
            std::string name_;
            double salary_;
        };

        Employee bob;
        bob.name("William Smith").salary(500.00);

    Maybe in this particular case it could just as well have been done in the constructor. But for Real World™ applications, classes would have lots more fields that should be set to appropriate values, and it becomes unmaintainable to do it in the constructor.

    So what about you? What personal tips and tricks would you like to share?

    Read the article

  • Microsoft Virtual Academy

    Carpe Diem. It's been a while since I last wrote an article for this blog, but alas, I was (and still am) very busy with customer work. Which is actually good. So, what is this article going to tell you? Well, in general, just what I already tweeted: life is a constant process of learning, especially as a software craftsman. Due to an upcoming new customer project in ASP.NET, I seized the opportunity to get my head deeper into the latest available technologies, like Windows Azure and SQL Azure. I know... cloud computing and so on is not a recent development and has been available for quite a while, but I never had the means to get into it until roughly two weeks ago.

    Microsoft Virtual Academy. I can't remember exactly what guided me towards the Microsoft Virtual Academy (MVA)... oh wait, yes: it was a posting on Facebook from an old CLIP community friend. He posted a shortened URL with the #MVA tag that caught my attention. Thanks for that, Thomas Kuberek. After the usual sign-in or registration via Live ID, I was a little bit surprised that Mauritius is not an available country option... A quick mail exchange with the MVA dean, and yeah, apologies for the missing entry. So, currently I'm learning about Microsoft products and services, and collecting points under "Not Listed Country" until Mauritius is added. Hopefully soon, as MVA honors your effort with different knowledge ranks that are compared to other students with public profiles. I think it's a nice move to add some game and competition factor into the learning experience. The tracks and their different modules are mainly references to publicly available material online, namely on either MSDN, TechNet, Channel9, or other Microsoft-based sites. The course material therefore also varies in media and format, ranging from simple online articles over downloadable documents (.docx or .pdf) to Silverlight / Windows Media streams with download options.

    Self-assessment and student rankings. Each module in a track can be finished by taking part in a self-assessment. Up to now, the assessments I did (and passed) were limited to 10 minutes of available time, and consisted of six to seven questions on the module training material. Nothing too serious, but it gives you a glimpse of how Microsoft certification exams are structured.

    Conclusion. Nothing really new, but nicely gathered, assembled, and presented to the MVA students. At the moment, I wouldn't dare to compare the richness and quality of those courses with professional training offers, like Pluralsight .NET Training, LearnDevNow, VTC, etc., but I think that MVA has potential. Give it a try, and let me know your opinions.

    Read the article

  • How to debug lag using Bluetooth connected mouse and A2DP headset?

    - by gertvdijk
    I own a Logitech M555b mouse (for a week now) for use with my HP EliteBook 8570w laptop running Kubuntu 12.04. It works fine right after connecting using the KDE Bluetooth control module. However, after some (seemingly random) time, it starts to lag: movements are delayed by roughly 500 ms for a short period of time. Usually it recovers after some time too, but it can take minutes. All actions are delayed: movements, clicks, scrolls. Additionally, the movements can be choppy during these times. A workaround that always works, for the same short period of time, is to disconnect and re-connect the mouse. This can be done using the same KDE Bluetooth control module.

    What did I try already?

        - Running this at boot time, to disable any power saving features on the Bluetooth hci0 device:
          echo on > `readlink -f /sys/class/bluetooth/hci0`/../../../power/level
        - Checking the mouse's batteries (it's just a week old; other, new batteries: same result).
        - Checking logs and kernel messages for Bluetooth-related entries: none aside from the expected messages at connect time.

    I'm running kernel 3.5.0-13-generic as provided in the xorg-edgers PPA. Booting the regular 3.2 Precise kernel results in the same behaviour.

    Some other information that may help:

        - It happens when no other Bluetooth connections are active on the machine.
        - Similar symptoms also occur on my Bluetooth stereo (A2DP) headset, but then it's audio lagging and skipping. Swapping Bluetooth profiles as described here then helps. Conclusion: it's not the mouse that's faulty.
        - The headset always worked fine using my now-dead ThinkPad T61p with built-in Bluetooth.
        - The Bluetooth module in my laptop is connected via USB and shows up as: Bus 002 Device 003: ID 0a5c:21e1 Broadcom Corp.
        - I'm mobile, and several people around me are using Bluetooth at work (A2DP mostly). It also occurs at home, where my neighbours are probably using Bluetooth as well. It could just be radio interference, but I think Bluetooth connections should just hop to another channel. And, moreover, it works properly instantly when re-connecting. Therefore I think it's a software driver issue and I'd like to debug it.

    Is there any way to get more verbose logging on the Bluetooth(-hid) modules?

    Read the article

  • Cannot get Virtualbox to install properly on Ubuntu 12.04

    - by lopac1029
    I cannot get VirtualBox to install properly on my 12.04. I first went with a manual install of the .deb from the old-builds section of the VirtualBox page. That .deb opened up the Software Center and installed. Then I got this error:

        VT-x/AMD-V hardware acceleration is not available on your system. Your 64-bit guest will fail to detect a 64-bit CPU and will not be able to boot.

    which I can only assume was due to my Ubuntu version being 32-bit (System Details - Overview - OS type: 32-bit, right?). So I followed these instructions to remove the .deb manually, restarted my laptop, and then found the actual VirtualBox package in the Software Center and installed from that (assuming it would give me the correct version for my system).

    So after all that (and then some), I'm still getting the same error when I connect to my new job's project in VirtualBox. Can anyone point me in the right direction of what to do here? This is the first time I've ever worked with VirtualBox, and no one at this company is using Ubuntu, so I'm on my own here.

    EDIT: Here is the direct output from running the two suggested commands:

        Inspiron-1750-brick:~ $ lscpu
        Architecture:          i686
        CPU op-mode(s):        32-bit, 64-bit
        Byte Order:            Little Endian
        CPU(s):                2
        On-line CPU(s) list:   0,1
        Thread(s) per core:    1
        Core(s) per socket:    2
        Socket(s):             1
        Vendor ID:             GenuineIntel
        CPU family:            6
        Model:                 23
        Stepping:              10
        CPU MHz:               2100.000
        BogoMIPS:              4189.45
        L1d cache:             32K
        L1i cache:             32K
        L2 cache:              2048K

        Inspiron-1750-brick:~ $ cat /proc/cpuinfo
        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 23
        model name      : Intel(R) Core(TM)2 Duo CPU T6500 @ 2.10GHz
        stepping        : 10
        microcode       : 0xa07
        cpu MHz         : 1200.000
        cache size      : 2048 KB
        physical id     : 0
        siblings        : 2
        core id         : 0
        cpu cores       : 2
        apicid          : 0
        initial apicid  : 0
        fdiv_bug        : no
        hlt_bug         : no
        f00f_bug        : no
        coma_bug        : no
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 13
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm dtherm
        bogomips        : 4189.80
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:

        processor       : 1
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 23
        model name      : Intel(R) Core(TM)2 Duo CPU T6500 @ 2.10GHz
        stepping        : 10
        microcode       : 0xa07
        cpu MHz         : 1200.000
        cache size      : 2048 KB
        physical id     : 0
        siblings        : 2
        core id         : 1
        cpu cores       : 2
        apicid          : 1
        initial apicid  : 1
        fdiv_bug        : no
        hlt_bug         : no
        f00f_bug        : no
        coma_bug        : no
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 13
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm dtherm
        bogomips        : 4189.45
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:

    Read the article

  • Memory Glutton

    - by AreYouSerious
    I have to admit that I can't get enough storage. I have hard drives just sitting around in case I need to move something, or I'm going to a friend's and either they want something I have or I want something they might have. What I'm going to talk about today is cost-effective memory for devices. I don't know how this particular device will work in a camera, as that's not what I use in my camera; in fact, I don't have a camera that doesn't use either SD or the old CompactFlash card (which is not so compact anymore). There's this thing that uses two micro SD cards to double the capacity of your memory, and it costs about 4 bucks, without the micro SD cards. I had one for about a year and was going to throw it away because I couldn't get it to work with my computer or with my Sony Reader. However, I found out in one last-ditch effort that this thing works beautifully with my Sony PSP. There is no software to speak of associated with this thing: you simply put in two SD cards of the same size (if you put in two different sizes it will still work, but you'll only double the smaller card's size) and format through the PSP. Voila, you now have a 29 GB memory card for your PSP.

    Why is this important? Well, for starters, you can carry more music and more videos. Second, if you have gone the way of the hacker... you can store more games on your card. There are just a few things you have to note... I speak from experience... First, you have to use the USB connection to the PSP to do any file moving; as I said previously, the card doesn't play well with my computers or card readers. I'm not saying it won't work at all, it just hasn't worked with anything I own. Second, if for some reason you try to hack/crack your PSP, don't attempt to delete a game from the PSP; use the USB file browser to remove games. If you delete from the PSP, you are likely to have to move all your files off, reformat, and start again... Just a couple of things I have noticed... if I had done something like that.

    Anyway, here's a link: http://www.photofast-adapter.com/  And if you want to buy one, get it off eBay; I've seen them as low as $1.99.

    Read the article

  • MPEG-2 playback inconsistent

    - by DustByte
    Many years ago I gave up on Linux because video playback was choppy. Now I'm back, and video playback is still playing up... I have two MPEG files: good.mpg and bad.mpg. (The original post includes avprobe output for both files.) My machine is an Intel Core 2 Duo E8400 @ 3.00GHz x 2, 64-bit. I do not know what graphics card I have. I run Ubuntu 12.04. So far I have had no problems with YouTube and playback of various video files, including the file good.mpg. However, the file bad.mpg gives me a headache!

    The file bad.mpg was produced by a respectable "old video tapes to DVD" company. I converted over 10 Video8 tapes to MPEG through them, and today I collected my hard drive containing the MPEG files. Unfortunately I have problems watching them! Here are some details:

        - Totem Movie Player 3.0.1 works well for several seconds, then it gets choppy and the playback is not at all smooth. The player also easily freezes for a while when trying to jump to another position in the file. Most strangely, though, the total time is shown as 0:42 (42 seconds) instead of the true 00:39:11.
        - The VLC media player does a better job. It shows the correct total length, but as soon as I jump to a new position in the video, it stalls. Playback also stalls after 30 seconds if I press play and leave it.
        - Using Handbrake with bad.mpg as the source, there is only one title to choose, and it is 6 min 53 seconds. I would have guessed the full 39 minutes of the video should have shown.
        - Lastly, putting the file bad.mpg in Dropbox and viewing it on my iPad with the Dropbox app seems fine (disregard the lack of easy jumping forward due to real-time encoding when streaming it).

    My question is simple: what is going on?! Why do I have problems playing the MPEG-2 files I just paid good money for (the issue with bad.mpg applies to all the files I had encoded)? Is it an issue with my particular Linux machine? The graphics card? But why has everything worked fine so far, and why does the good.mpg file not cause any problems?

    Read the article

  • SQL SERVER – QUOTED_IDENTIFIER ON/OFF Explanation and Example – Question on Real World Usage

    - by Pinal Dave
    This is a follow-up blog post to SQL SERVER – QUOTED_IDENTIFIER ON/OFF and ANSI_NULL ON/OFF Explanation. I wrote that blog six years ago, and I had plans to write a follow-up post on the same subject. Today, when I was going over my to-do list, I was surprised to find an item there which was six years old and which I had never got to. In the earlier blog post I wrote an explanation of Quoted Identifier and ANSI Null. In this blog post we will see a quick example of Quoted Identifier. However, before we continue, let us refresh what QUOTED_IDENTIFIER does.

    QUOTED_IDENTIFIER ON/OFF

    This option specifies the setting for the use of double quotes. When this is ON, a double quotation mark is treated as part of a SQL Server identifier (object name). This can be useful in situations in which identifiers are also SQL Server reserved words. In simple words, when we have QUOTED_IDENTIFIER ON, anything which is wrapped in double quotes becomes an object. E.g.:

        -- The following will work
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE DATABASE "Test1"
        GO

        -- The following will throw an error about Incorrect syntax near 'Test2'.
        SET QUOTED_IDENTIFIER OFF
        GO
        CREATE DATABASE "Test2"
        GO

    This feature is particularly helpful when we are working with reserved keywords in SQL Server. For example, if you have to create a database with the name VARCHAR or INT or DATABASE, you may want to put double quotes around your database name and turn on quoted identifiers to create a database with such a name (a further sketch appears at the end of this post). Personally, I do not think anybody will ever create a database with reserved keywords intentionally, as it would just lead to confusion. Here is another example to give you further clarity about how the QUOTED_IDENTIFIER setting works with a SELECT statement:

        -- The following will throw an error about Invalid column name 'Column'.
        SET QUOTED_IDENTIFIER ON
        GO
        SELECT "Column"
        GO

        -- The following will work
        SET QUOTED_IDENTIFIER OFF
        GO
        SELECT "Column"
        GO

    Personally, I always use the following method to create a database, as it works irrespective of the quoted identifier setting's status. It always creates objects with the name I want:

        CREATE DATABASE [Test3]

    I believe where the quoted identifier setting is useful in the real world is when we have a script generated from another database where this setting was ON, and we now have to execute the same script in our environment.

    Question to you: I personally have never used this feature, as I mentioned earlier. I believe this feature is there to support scripts which are generated in another SQL database, or to generate scripts for other databases. Do you have a real-world scenario where we need to turn Quoted Identifiers on or off?

    Click to Download Scripts

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
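    To make the reserved-keyword case concrete, here is a small additional sketch along the lines of the examples above (the table and column names are mine, not from the original post):

        -- With QUOTED_IDENTIFIER ON, reserved words can be used as identifiers
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE "Order" ("Select" INT)
        GO

        -- Bracket quoting works regardless of the QUOTED_IDENTIFIER setting
        SET QUOTED_IDENTIFIER OFF
        GO
        CREATE TABLE [Group] ([From] INT)
        GO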

    Read the article

  • One Week To Go: OTN Architect Day: Cloud Computing

    - by Bob Rhubart
    One week remains until OTN Architect Day: Cloud Computing kicks off at the spectacular Oracle HQ campus in Redwood Shores, CA. The event is free, and there is still time to register.

    When: Tuesday, July 9, 2013, 8:30am - 12:30pm
    Where: Oracle Conference Center, 350 Oracle Pkwy, Redwood City, CA 94065

    Register now. It's free! Here's the latest update to the event agenda:

    8:30am - 9:00am: Registration and Continental Breakfast

    9:00am - 9:45am: Keynote: 21st Century IT | Dr. James Baty, VP, Global Enterprise Architecture Program, Oracle
    Imagine a time long, long ago. A time when servers were certified and dedicated to specific applications, when anything posted on an enterprise web site was from restricted, approved channels, and when we tried to limit the growth of 'dirty' data and storage. Today, applications are services running in the multi-tenant hybrid cloud. Companies beg their customers to tweet them, friend them, and publicly rate their products. And constantly analyzing a deluge of Internet, social, and sensor data is the key to creating the next super-successful product, or capturing an evil terrorist. The old IT architecture was planned, dedicated, stable, controlled, with separate and well-defined roles. The new architecture is shared, dynamic, continuous, XaaS, DevOps. This keynote session describes the challenges and opportunities that the new business / IT paradigms present to the IT architecture and architects.

    9:45am - 10:30am: Technical Session: Oracle Cloud: A Case Study in Building a Cloud | Anbu Krishnaswami, Enterprise Architect, Oracle
    Building a cloud can be challenging thanks to the complex requirements unique to cloud computing and the massive scale typically associated with cloud. Cloud providers can take an Infrastructure as a Service (IaaS) approach and build a cloud on virtualized commodity hardware, or they can take the Platform as a Service (PaaS) path, a service-oriented approach based on pre-configured, integrated, engineered systems. This presentation uses the Oracle Cloud itself as a case study in the use of engineered systems, demonstrating how the technical design of engineered systems is leveraged for building PaaS and SaaS cloud services and a cloud management infrastructure. The presentation will also explore the principles, patterns, best practices, and architecture views provided in Oracle's cloud reference architecture.

    10:30am - 10:45am: Break

    10:45am - 11:30am: Technical Session: Database as a Service | Markus Michalewicz, Senior Principal Product Manager, Oracle Real Application Clusters (RAC)
    New applications are now commonly built in a cloud model, where the database is consumed as a service, and many established business processes are beginning to migrate to database as a service (DBaaS). This adoption of DBaaS is made possible by the availability of new capabilities in the database that enable resource pooling, dynamic resource management, model-based provisioning, metered use, and effective quality-of-service controls. This session will examine the catalog of database services at a large commercial bank to understand how these capabilities are enabling DBaaS for a wide range of needs within the enterprise.

    11:30am - 12:00pm: Panel Q&A
    Dr. James Baty, Anbu Krishnaswami, and Markus Michalewicz respond to audience questions.

    Registration is free, but seating is limited, so register now.

    Read the article

  • How to get tens of millions of pages indexed by Google bot?

    - by Chris Adragna
    We are currently developing a site that has 8 million unique pages, will grow to about 20 million right away, and eventually to about 50 million or more. Before you criticize... yes, it provides unique, useful content. We continually process raw data from public records and, by doing some data scrubbing, entity rollups, and relationship mapping, we've been able to generate quality content, developing a site that's quite useful and also unique, in part due to the breadth of the data.

    Its PR is 0 (new domain, no links), and we're getting spidered at a rate of about 500 pages per day, putting us at about 30,000 pages indexed thus far. At this rate, it would take over 400 years to index all of our data. I have two questions:

        1. Is the rate of indexing directly correlated to PR? And by that I mean, is it correlated enough that purchasing an old domain with good PR would get us to a workable indexing rate (in the neighborhood of 100,000 pages per day)?
        2. Are there any SEO consultants who specialize in aiding the indexing process itself? We're otherwise doing very well with SEO, on-page especially. Besides, the competition for our "long-tail" keyword phrases is pretty low, so our success hinges mostly on the number of pages indexed. Our main competitor has achieved approx 20MM pages indexed in just over one year's time, along with an Alexa 2000-ish ranking.

    Noteworthy qualities we have in place:

        - Page download speed is pretty good (250-500 ms).
        - No errors (no 404 or 500 errors when getting spidered).
        - We use Google Webmaster Tools and log in daily.
        - Friendly URLs in place.
        - I'm afraid to submit sitemaps. Some SEO community postings suggest a new site with millions of pages and no PR is suspicious. There is also a Google video of Matt Cutts speaking of a staged on-boarding of large sites, in order to avoid increased scrutiny (at approx 2:30 in the video).
        - Clickable site links deliver all pages, no more than four pages deep and typically no more than 250(-ish) internal links on a page.
        - Anchor text for internal links is logical and adds relevance hierarchically to the data on the detail pages.
        - We had previously set the crawl rate to the highest on Webmaster Tools (only about a page every two seconds, max). I recently turned it back to "let Google decide", which is what is advised.

    Read the article

  • Ubuntu Sluggish and Graphics Problem after Nvidia Driver Update

    - by iam
    I just recently started using Ubuntu (12.04), a few weeks ago, and noticed that the interface is very slow and sluggish:

        - On Dash, I have to type the entire app name and wait a few seconds before it shows up in the search box, and a bit longer before it displays search results.
        - Opening new files or applications also takes quite long and feels awkward.
        - Dragging icons or moving app windows around is not very responsive either: I have to take extra care in moving the mouse, otherwise Ubuntu makes the wrong movement or does something incorrect instead, e.g. opening the window's full-screen options or moving a file to a different folder, which is frustrating.

    My PC is a few years old (1.7 GB RAM), so this could be a reason too, but when I checked in System Monitor it's hardly ever consuming much memory. Plus, web surfing on Firefox is actually lightning fast (much faster than Windows), so I suspect there might be something wrong with the graphics driver (mine is a GeForce 7050). I checked around System Settings and found an option to update the Nvidia driver. So I tried it and restarted, as instructed.

    Now, I got into a big problem upon restart... The login-screen window (where I have to type in the password) would take several attempts to display, and finally did not manage to (it would freeze for several seconds before there was any movement again). The background screen also kept reloading several times, and at some point the screen turned black with pixelated color strips running on the bottom third of the screen; after a long while the background screen would come up again. Eventually I managed to access the desktop, but the launcher, top menu bar, and app window borders would not appear.

    I searched around and found that many other people have this same problem after updating the Nvidia driver, and on some threads the suggestion is to run "killall -u $USER" on the command line (the only one among the various online suggestions I could act on, as at that point I could not access the Terminal without the launcher: Ctrl-Alt-T doesn't work for me). So I did that and was able to access the desktop correctly again, with launcher and menu, by creating a new account. But I would still have the same problem when logging into my original account. So I finally tried upgrading to 12.10, and now I can access my original account with a fully functional desktop: the launcher, menu, and window borders are all back.

    However, the problem with sluggishness still remains. And now I am scared of ever having to update the Nvidia driver again! I wonder if anyone knows why updating the Nvidia driver causes this problem, and whether there is a way I can update it safely in the future? I'm also still not sure how to solve the problem with the sluggishness, and not sure where else to look for a solution.

    Read the article

  • ArchBeat Link-o-Rama for 2012-04-05

    - by Bob Rhubart
    - Webcast: Oracle Maximum Availability Architecture Best Practices (event.on24.com)
      Date: Thursday, April 12, 2012. Time: 10:00 AM PDT. Oracle expert Tom Kyte discusses how Oracle's Maximum Availability Architecture can help to minimize the costs and risk of downtime.
    - Oracle Enterprise Manager Ops Center 12c Launch - Interactive Webcast and Live Chat (www.oracle.com)
      Thursday, April 12, 2012. 9 a.m. PT / 12 p.m. ET / 4 p.m. GMT. Speakers: Steve Wilson (VP Systems Management, Oracle), John Fowler (Exec VP Systems, Oracle), Brad Cameron (VP Development, Oracle Fusion Middleware), Bill Nesheim (VP Oracle Solaris), Dennis Reno (VP Customer Portal Experience, Oracle), Mike Wookey (Chief Architect, Oracle Enterprise Manager Ops Center), Prasad Pai (Sr Director, Oracle Enterprise Manager Ops Center)
    - 2012 Real World Performance Tour Dates | Performance Tuning | Performance Engineering (www.ioug.org)
      Coming to your town: a full day of real world database performance with Tom Kyte, Andrew Holdsworth, and Graham Wood. Rochester, NY - March 8; Los Angeles, CA - April 30; Orange County, CA - May 1; Redwood Shores, CA - May 3.
    - Oracle Technology Network Developer Day: MySQL - New York (www.oracle.com)
      Wednesday, May 02, 2012, 8:00 AM - 4:30 PM. Grand Hyatt New York, 109 East 42nd Street, Grand Central Terminal, New York, NY 10017
    - Webcast Series: Data Warehousing Best Practices (event.on24.com)
      April 19, 2012: Best Practices for Workload Management of a Data Warehouse on Oracle Exadata. May 10, 2012: Best Practices for Extreme Data Warehouse Performance on Oracle Exadata.
    - How to create a Global Rule that stores a document's folder path in a custom metadata field | Nicolas Montoya (blogs.oracle.com)
      An illustrated how-to from Oracle Fusion Middleware A-Team blogger Nicolas Montoya.
    - Get Proactive with Fusion Middleware | Daniel Mortimer (blogs.oracle.com)
      Daniel Mortimer shows how to access "a one stop shop for navigating to proactive support material, tools, and communication channels related to Oracle Fusion Middleware."
    - Build an enterprise on 'other peoples' work', via SOA and cloud | Joe McKendrick (www.zdnet.com)
      Are you down with OPW? Joe McKendrick's synopsis of a recent presentation by David Linthicum focuses on reuse.
    - Oracle Fusion Middleware Security: Unsolicited login with OAM 11g | Chris Johnson (fusionsecurity.blogspot.com)
      Chris Johnson shows how to create a shopping cart login model using "plain old HTML."
    - How to use the Human WorkFlow Web Services | Edwin Biemond (biemond.blogspot.com)
      Oracle ACE Edwin Biemond shows how to invoke two WorkFlow web services to query the Human task in Oracle SOA Suite with your own ordering and restrictions.
    - Bad Practice Use Case for LOV Performance Implementation in ADF BC | Andrejus Baranovskis (andrejusb.blogspot.com)
      "If you want to learn something well, there is nothing better [than] to learn bad practices first," says Oracle ACE Director Andrejus Baranovskis.

    Thought for the Day: "The best meetings get real work done. When your people learn that your meetings actually accomplish something, they will stop making excuses to be elsewhere." - Larry Constantine

    Read the article

  • Where Are You on the Visualization Maturity Curve?

    - by Celine Beck
    The old phrase "A picture is worth a thousand words" is as true now as ever. Providing the right users with access to the right product data, at the right time, can provide significant benefits to a business. This is especially evident with increasing technical and product complexities, elongated supply chains, and growing pressure to bring innovative products to market faster. With this in mind, it is easy to understand why visualization is an integral part of any successful product lifecycle management (PLM) strategy.

    At a bare minimum, knowledge workers use multiple individual documents of different formats and structure, and leverage visualization solutions to access information; but the real value of visualization can be fully reaped when it is connected to enterprise applications like PLM and tied to the appropriate business context. The picture below illustrates this visualization maturity curve and the transformational effect that visualization can have on PLM processes and performance, as we presented during the last Oracle Open World (check out the post about AutoVue Key Highlights from Oracle Open World 2012 for more information).

    Organizations are likely to see greater positive impact on business performance when visualization is connected to enterprise systems, allowing access to information coming from multiple sources, such as PLM, supply chain management (SCM), and enterprise resource planning (ERP). This allows organizations to reach higher levels of collaboration and optimize decision-making capacity, as users can benefit from in-context access to visual information. For instance, within a PLM system, a design engineer can access a product assembly and review digital annotations added by other users specific to the engineering change request he is reviewing, rather than all historical annotations.

    The last stage on the curve is what we call augmented business visualization (ABV). ABV is an innovative framework which lets structured data (from Oracle's Agile PLM, for instance) interact with unstructured data (documents, designs, 3D models, etc.). With this new level of integration, information coming from multiple sources can be presented in a highly visual fashion; color displays can be used in order to identify parts with specific characteristics (for example, pending quality issues), and you can take actions directly from within the context of documents and designs, maximizing user productivity.

    Those who had the chance to attend our PLM session during Oracle Open World already got a sneak peek of our latest augmented business visualization for Oracle's Agile PLM. The solution generated a lot of wows. Stephen Porter, CEO at Zero Wait State, indicated in a post entitled "The PLM State: the Manhattan Project - Oracle's Next Big Secret Weapon" that "this kind of synergy between visualization and PLM could qualify as a powerful weapon differentiating Agile PLM from other solutions."

    If you are interested in learning more about ABV for Oracle's Agile PLM and hearing real examples of the use of visualization at all stages of the visualization maturity curve, don't miss our Visual Decision Making to Optimize New Product Development and Introduction session during the Oracle Value Chain Summit (Feb. 4-6, 2013, San Francisco). We look forward to seeing you there!

    Read the article

  • Configure Calendar Server 7 to Use the davUniqueId Attribute

    - by dabrain
    Starting with Calendar Server 7 Update 3 (Patch 08), we introduce a new attribute, davUniqueId, in the davEntity objectclass, to use as the unique identifier. The reason behind this is quite simple: the LDAP operational attribute nsUniqueId had been chosen as the default value for the unique identifier, and it was discovered that this choice has a potentially serious downside. The problem with using nsUniqueId is that if the LDAP entry for a user, group, or resource is deleted and recreated in LDAP, the new entry receives a different nsUniqueId value from the Directory Server, causing a disconnect from the existing account in the calendar database. As a result, recreated users cannot access their existing calendars.

    How to configure Calendar Server to use the davUniqueId attribute:

    First, populate davUniqueId for the LDAP users. You can create an LDIF output file only, or (with the -x option) run ldapmodify directly from the populate-davuniqueid shell script:

        # ./populate-davuniqueid -h localhost -p 389 -D "cn=Directory Manager" -w <passwd> -b "o=red" -O -o /tmp/out.ldif

    The ldapmodify might fail as shown below. In that case the LDAP entry already has the 'daventity' objectclass, so run the populate-davuniqueid script without the -O option:

        # ldapmodify -x -h localhost -p 389 -D "cn=Directory Manager" -w <passwd> -c -f /tmp/out.ldif
        modifying entry "uid=mparis,ou=People,o=vmdomain.tld,o=red"
        ldapmodify: Type or value exists (20)

    Here the user 'mparis' already has the objectclass 'daventity', so ldapmodify does not process this DN and just moves on to the next one (if you started ldapmodify with the -c option; otherwise it stops completely). The change that failed looked like this:

        dn: uid=mparis,ou=People,o=vmdomain.tld,o=red
        changetype: modify
        add: objectclass
        objectclass: daventity
        -
        add: davuniqueid
        davuniqueid: 01a2c501-af0411e1-809de373-18ff5c8d

    Running populate-davuniqueid without the -O option, or changing the output file, produces just:

        dn: uid=mparis,ou=People,o=vmdomain.tld,o=red
        changetype: modify
        add: davuniqueid
        davuniqueid: 01a2c501-af0411e1-809de373-18ff5c8d

    and the ldapmodify works fine now. The only issue I see here is that you need to verify which users might need the 'daventity' objectclass as well. Alternatively, start without the objectclass and add it only for the users for which you get an 'Objectclass violation' report, which indicates the objectclass is missing.

        # ldapmodify -x -h localhost -p 389 -D "cn=Directory Manager" -w <passwd> -c -f /tmp/out.ldif
        modifying entry "uid=mparis,ou=People,o=vmdomain.tld,o=red"

    Now it is time to change the configuration to use the davuniqueid attribute:

        # ./davadmin config modify -o davcore.uriinfo.permanentuniqueid -v davuniqueid

    You also need to modify the search filter to use davuniqueid instead of nsuniqueid:

        # ./davadmin config modify -o davcore.uriinfo.subjectattributes -v "cn davstore icsstatus mail mailalternateaddress davUniqueId owner preferredlanguageuid objectclass ismemberof uniquemember memberurl mgrprfc822mailmember"

    Afterward, IWC Calendar works fine and my test user is able to access all his old events.
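    To double-check the population step before switching the configuration over, you can query the attribute directly. A minimal sketch, assuming the OpenLDAP client tools and the same host, suffix, and test user as above (the DSEE ldapsearch takes the same -h/-p/-D/-w options without -x); the entry should show both the daventity objectclass and the davuniqueid value:

        # verify that the recreated entry carries the objectclass and the new identifier
        # ldapsearch -x -h localhost -p 389 -D "cn=Directory Manager" -w <passwd> -b "o=red" "(uid=mparis)" objectclass davuniqueid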

    Read the article

  • PPTP VPN connects via NM but goes down during SSH connection

    - by Andrea Olivato
    I set up a PPTP VPN connection via Network Manager, and it connects correctly (I see the lock near the notification icon and the message "VPN connection has been successfully..."). As soon as I try to perform any SSH connection via the established tunnel, the connection itself goes down with the message "VPN connection failed". The SSH connection always fails at:

        debug1: SSH2_MSG_KEXINIT sent

    I've looked into the system logs, and this is the log:

        Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> Starting VPN service 'pptp'...
        Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN service 'pptp' started (org.freedesktop.NetworkManager.pptp), PID 7093
        Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN service 'pptp' appeared; activating connections
        Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: init (1)
        Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: starting (3)
        Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN connection 'Redation' (Connect) reply received.
        Dec 12 12:25:05 ushuaia NetworkManager[1155]: <info> VPN connection 'Redation' (IP4 Config Get) reply received from old-style plugin.
        Dec 12 12:25:05 ushuaia NetworkManager[1155]: <info> VPN Gateway: 5.98.141.210
        Dec 12 12:25:06 ushuaia NetworkManager[1155]: <info> VPN connection 'Redation' (IP Config Get) complete.
        Dec 12 12:25:06 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: started (4)
        Dec 12 12:25:14 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: stopping (5)
        Dec 12 12:25:14 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: stopped (6)
        Dec 12 12:25:14 ushuaia NetworkManager[1155]: <info> VPN plugin state change reason: 0
        Dec 12 12:25:15 ushuaia NetworkManager[1155]: <warn> error disconnecting VPN: Could not process the request because no VPN connection was active.
        Dec 12 12:25:20 ushuaia NetworkManager[1155]: <info> VPN service 'pptp' disappeared

    Please note that the same VPN is configured on my colleagues' Windows 7 machines and works without problems when they use PuTTY to connect via SSH.
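    An SSH session that dies right after "debug1: SSH2_MSG_KEXINIT sent" is a classic symptom of an MTU/fragmentation problem on the tunnel, because key exchange is the first packet large enough to need fragmentation. A hedged diagnostic sketch, assuming the PPTP tunnel comes up as ppp0 (check with ip link) and that 1400 bytes is low enough for the path:

        # show the current MTU of the PPTP interface
        ip link show ppp0
        # temporarily lower it, then retry the SSH connection
        sudo ip link set dev ppp0 mtu 1400

    If SSH then works, the lower MTU can be made persistent in the connection settings or via a NetworkManager dispatcher script.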

    Read the article

  • Repeated disconnects on WPA PEAP network

    - by exasperated
    My school has a WPA PEAP network with GTC inner authentication. I am able to connect to the network, but once I load a website or two, the network becomes unresponsive (i.e. in Chromium, it gets stuck at "Sending request"), and I'm eventually disconnected. Any help will be greatly appreciated. Here's some log output; I can provide more if needed:

    Ubuntu 13.04, 3.8.0-32-generic, x86_64

    lsusb:
        03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6235 (rev 24)

    lsmod:
        iwldvm                241872  0
        mac80211              606457  1 iwldvm
        iwlwifi               173516  1 iwldvm
        cfg80211              511019  3 iwlwifi,mac80211,iwldvm

    dmesg:
        [    3.501227] iwlwifi 0000:03:00.0: irq 46 for MSI/MSI-X
        [    3.503541] iwlwifi 0000:03:00.0: loaded firmware version 18.168.6.1
        [    3.527153] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUG disabled
        [    3.527162] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUGFS enabled
        [    3.527170] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEVICE_TRACING enabled
        [    3.527178] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEVICE_TESTMODE enabled
        [    3.527186] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_P2P disabled
        [    3.527192] iwlwifi 0000:03:00.0: Detected Intel(R) Centrino(R) Advanced-N 6235 AGN, REV=0xB0
        [    3.527240] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [    3.551049] ieee80211 phy0: Selected rate control algorithm 'iwl-agn-rs'
        [  375.153065] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [  375.159727] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0
        [  375.553201] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [  375.559871] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0
        [ 1892.110738] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [ 1892.117357] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0
        [ 5227.235372] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [ 5227.242122] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0
        [ 5817.817954] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [ 5817.824560] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0
        [ 5824.571917] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use
        [ 5824.571929] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP
        [ 5824.571935] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP
        [ 6956.290061] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [ 6956.296671] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0
        [ 6963.080560] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use
        [ 6963.080566] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP
        [ 6963.080570] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP
        [ 7613.469241] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [ 7613.475870] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0
        [ 7620.201265] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use
        [ 7620.201278] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP
        [ 7620.201285] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP
        [ 8232.762453] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [ 8232.769065] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0
        [ 8239.581772] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use
        [ 8239.581784] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP
        [ 8239.581792] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP
        [13763.634808] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [13763.641427] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0
        [16955.598953] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [16955.605574] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0

    lshw:
        *-network
             description: Wireless interface
             product: Centrino Advanced-N 6235
             vendor: Intel Corporation
             physical id: 0
             bus info: pci@0000:03:00.0
             logical name: wlan0
             version: 24
             serial: b4:b6:76:a0:4b:3c
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=iwlwifi driverversion=3.8.0-32-generic firmware=18.168.6.1 ip=10.250.169.96 latency=0 link=yes multicast=yes wireless=IEEE 802.11abgn
             resources: irq:46 memory:f7c00000-f7c01fff

    iwlist scan:
        Cell 02 - Address: 24:DE:C6:B0:C7:D9
                  Channel:36
                  Frequency:5.18 GHz (Channel 36)
                  Quality=29/70  Signal level=-81 dBm
                  Encryption key:on
                  ESSID:"CatChat2x"
                  Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
                  Mode:Master
                  Extra:tsf=0000004ff3fe419b
                  Extra: Last beacon: 27820ms ago
                  IE: Unknown: 0009436174436861743278
                  IE: Unknown: 01088C129824B048606C
                  IE: Unknown: 030124
                  IE: IEEE 802.11i/WPA2 Version 1
                      Group Cipher : CCMP
                      Pairwise Ciphers (1) : CCMP
                      Authentication Suites (1) : 802.1x
                  IE: Unknown: 2D1ACC011BFFFF000000000000000000000000000000000000000000
                  IE: Unknown: 3D1624001B000000FF000000000000000000000000000000
                  IE: Unknown: DD180050F2020101800003A4000027A4000042435E0062322F00
                  IE: Unknown: DD1E00904C33CC011BFFFF000000000000000000000000000000000000000000
                  IE: Unknown: DD1A00904C3424001B000000FF000000000000000000000000000000
        Cell 04 - Address: 24:DE:C6:B0:C3:E9
                  Channel:149
                  Frequency:5.745 GHz
                  Quality=28/70  Signal level=-82 dBm
                  Encryption key:on
                  ESSID:"CatChat2x"
                  Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
                  Mode:Master
                  Extra:tsf=000000181f60e19c
                  Extra: Last beacon: 28680ms ago
                  IE: Unknown: 0009436174436861743278
                  IE: Unknown: 01088C129824B048606C
                  IE: Unknown: 030195
                  IE: Unknown: 050400010000
                  IE: IEEE 802.11i/WPA2 Version 1
                      Group Cipher : CCMP
                      Pairwise Ciphers (1) : CCMP
                      Authentication Suites (1) : 802.1x
                  IE: Unknown: 2D1ACC011BFFFF000000000000000000000000000000000000000000
                  IE: Unknown: 3D1695001B000000FF000000000000000000000000000000
                  IE: Unknown: DD180050F2020101800003A4000027A4000042435E0062322F00
                  IE: Unknown: DD1E00904C33CC011BFFFF000000000000000000000000000000000000000000
                  IE: Unknown: DD1A00904C3495001B000000FF000000000000000000000000000000
                  IE: Unknown: DD07000B8601040817
                  IE: Unknown: DD0E000B860103006170313930333032
        Cell 09 - Address: 24:DE:C6:B0:C0:29
                  Channel:149
                  Frequency:5.745 GHz
                  Quality=39/70  Signal level=-71 dBm
                  Encryption key:on
                  ESSID:"CatChat2x"
                  Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
                  Mode:Master
                  Extra:tsf=00000112fb688ede
                  Extra: Last beacon: 27716ms ago

    ifconfig (while connected):
        wlan0     Link encap:Ethernet  HWaddr b4:b6:76:a0:4b:3c
                  inet addr:10.250.16.220  Bcast:10.250.31.255  Mask:255.255.240.0
                  inet6 addr: fe80::b6b6:76ff:fea0:4b3c/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:230023 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:130970 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:255999759 (255.9 MB)  TX bytes:16652605 (16.6 MB)

    iwconfig (while connected):
        wlan0     IEEE 802.11abgn  ESSID:"CatChat2x"
                  Mode:Managed  Frequency:5.745 GHz  Access Point: 24:DE:C6:B0:C0:29
                  Bit Rate=6 Mb/s   Tx-Power=15 dBm
                  Retry  long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality=36/70  Signal level=-74 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:3   Missed beacon:0
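    For comparison, a PEAP/GTC network like this is typically described to wpa_supplicant with a block such as the one below. This is a minimal sketch rather than the poster's actual config: the ssid comes from the scan output above, while the identity and password values are placeholders.

        # /etc/wpa_supplicant.conf: minimal WPA2-Enterprise PEAP/GTC network block
        network={
            ssid="CatChat2x"
            key_mgmt=WPA-EAP
            eap=PEAP
            # inner (Phase 2) authentication, as used by this network
            phase2="auth=GTC"
            # placeholder credentials: substitute the real school account
            identity="user@school.example"
            password="changeme"
        }

    Running wpa_supplicant in the foreground with this file, bypassing Network Manager, can help show whether the disconnects come from the driver or from NM's handling of the 802.1X session:

        # -d prints the 802.1X exchange as it happens
        sudo wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf -D nl80211 -d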

    Read the article

  • How do I prevent having to log in on 3 separate prompts every time I start my machine?

    - by JC
    Ubuntu 11.04 Natty Narwhal, Ubuntu Classic desktop. Each time I start my machine, I have to log in 3 times. I spent a week in the Freenode #ubuntu IRC channel and got nothing but condescension. I've searched the official Ubuntu forums for similar problems, tried every recommendation, and still get 3 login screens. As a workaround, I have reset login such that I get a login screen at startup, which I'd prefer not to get, since no one but me has physical access to this machine.

    I have gone into System > Preferences > Passwords and Encryption Keys, set 'Passwords: default' to 'Default' and unlocked it, and unlocked the 'Passwords: login' key too. Next, since that changed nothing, I set 'Passwords: login' to 'Default' and checked to make sure it was still unlocked. Again, no change; still 3 login prompts at startup. I've checked twice to ensure that I am the owner of the files; I am. At the suggestion of several people in #ubuntu, I've deleted first one, then the other password key in 'Passwords and Encryption Keys'. Still 3 login prompts.

    I changed from the Unity desktop to Ubuntu Classic. While that didn't fix the above problem, it is a much more elegant desktop than Unity, and I'll keep it. From what I've read, this seems to be a Seahorse issue, but beyond that, no one seems to have a solution that works. I'm lost. This shouldn't be this difficult or annoying.

    I'm trying to help our local Old Time music collective get their machines switched over to Ubuntu in order to save them some money, which they can use to promote their DRM-free music. But from what I've seen of Ubuntu so far on my own machine, I can't really recommend that they make this switch. I hope to be proved wrong on that point. But as it stands, if I were out of town or out of country and they ran into a problem, they'd have no way of fixing it, as they're all less experienced than even I am.

    I'm not trying to cast aspersions on Ubuntu or Linux, but it seems pretty clear that KNOWLEDGEABLE, HELPFUL support for Ubuntu is hard to come by unless the user is willing to put up with condescension. Having worked with, and run, several non-profits over the past 20 years, I know that getting volunteers to act professionally can be like herding cats. But an organization's reputation can be denigrated by sarcastic behavior on the part of those who serve, effectively, as its public face. Thank you all for your help and support. Now... does anyone have a solution to my problem?
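    For what it's worth, the usual suspect behind repeated keyring prompts on Natty is a login keyring whose stored password no longer matches the account password. One commonly suggested workaround, offered here as a sketch rather than a confirmed fix for this exact case, is to move the old keyring aside so GNOME recreates it with the current password at the next login:

        # back up, then remove, the stale login keyring (GNOME 2-era path used by 11.04)
        cp ~/.gnome2/keyrings/login.keyring ~/login.keyring.bak
        rm ~/.gnome2/keyrings/login.keyring

    Passwords saved in that keyring are lost when it is recreated, which is why the backup copy is taken first.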

    Read the article

  • EPM 11.1.2.2.000 release - considerations

    - by THE
    (guest article by Nancy) Please be aware that with the upcoming release of EPM v11.1.2.2.000, it is highly recommended you first read the "ORACLE® ENTERPRISE PERFORMANCE MANAGEMENT SYSTEM 11.1.2.2.000 Readme" prior to installing this release. We want to highlight the "Installation Information" section, which includes the following late-breaking information:

    Business Rules Migration to Calculation Manager

    Oracle Hyperion Calculation Manager has replaced Oracle Hyperion Business Rules as the mechanism for designing and managing business rules; therefore, Business Rules is no longer released with EPM System Release 11.1.2.2. If you are applying 11.1.2.2 as a maintenance release, or upgrading to Release 11.1.2.2, and have been using Business Rules in an earlier release, you must migrate to Calculation Manager rules in Release 11.1.2.2. (See the Oracle Enterprise Performance Management System Installation and Configuration Guide.)

    Planning User Interface Enhancements

    This release of Planning includes a large number of user interface enhancements, as described in Oracle Hyperion Planning New Features. To optimize performance with these new features, you must implement the following recommended configuration:

    Server: 64-bit, 16 GB physical RAM
    Client: Optimized for Internet Explorer 9 and Firefox 10 or higher
    Client-to-Server Connectivity: High-speed internet connection or VPN connection between client and server, with client-to-server ping time < 150 milliseconds for best performance

    The new, improved Planning user interface requires efficient browsers to handle the interactivity provided through Web 2.0-like functionality. In our testing, Internet Explorer 7, Internet Explorer 8, and Firefox 3.x are not sufficient to handle such interactivity, and responsiveness in these browser versions is not as fast as the user interface in previous releases of Planning. For this reason, we strongly recommend that you upgrade your browser to Internet Explorer 9 or Firefox 10 to get responsiveness similar to what you experienced in previous releases. In some instances, the response times in Internet Explorer 7, Internet Explorer 8, and Firefox 3.x could be acceptable; hence, we suggest that you adopt the new user interface only after you conduct an end-user response test and are satisfied with the results for these browser versions. Please note that it is still possible to leverage the old user interface and features from Planning Release 11.1.2.1. (For more information, see “Using the Planning Release 11.1.2.1 User Interface and Features” in the Oracle Hyperion Planning Administrator's Guide.)

    IBM HTTP Server and IIS Default Ports

    Both IBM HTTP Server and IIS Web Server use 80 as their default port. If you are using WebSphere, you must change one of these defaults so that there is no port conflict (see the example after this item). If you have further questions, please utilize the Planning or Essbase MOS Community.
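    As an illustration of the port change mentioned above: IBM HTTP Server is Apache-based, so its listening port is set by the Listen directive in httpd.conf. A minimal sketch, assuming a default install layout and that 8080 is a free port on the machine (any unused port works):

        # <ihs_root>/conf/httpd.conf: move IBM HTTP Server off the default port 80
        # so it no longer collides with IIS
        Listen 8080

    Restart the IBM HTTP Server instance afterward, and update any WebSphere virtual host definitions that reference the old port.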

    Read the article

< Previous Page | 401 402 403 404 405 406 407 408 409 410 411 412  | Next Page >