Search Results

Search found 17625 results on 705 pages for 'techno log'.


  • help: cannot make ubuntu 64-bit v12.04 install work

    - by honestann
    I decided it was time to update my ubuntu (single boot) computer from 64-bit v10.04 to 64-bit v12.04. Unfortunately, for some reason (or reasons) I just can't make it work. Note that I am attempting a fresh install of 64-bit v12.04 onto a new 3TB hard disk, not an upgrade of the 1TB hard disk that has contained my 64-bit v10.04 installation. To perform the attempted install of v12.04 I unplug the SATA cable from the 1TB drive and plug it into the 3TB drive (to avoid risking damage to my working v10.04 installation).

    I downloaded the ubuntu 64-bit v12.04 install DVD ISO file (~1.6 GB) from the ubuntu releases webpage and burned it onto a DVD. I have downloaded the DVD ISO file 3 times and burned 3 of these installation DVDs (twice with v10.04 and once with my winxp64 system), but none of them work. I run the "check disk" on the DVDs at the beginning of the installation process to assure the DVD is valid. I also tried to install on two older 250GB seagate drives in the same computer. During every attempt I plug the same SATA cable (sda) into only one disk drive (the 3TB or one of the 250GB drives) and leave the other disk drives unconnected (for simplicity). Installation takes about 30 minutes on the 250GB drives, and about 60 minutes on the 3TB drive - not sure why. When I install on the 250GB drives, the install process finishes, the computer reboots (after the install DVD is removed), but I get a grub error 15.

    It is my understanding that 64-bit ubuntu (and 64-bit linux in general) has no problem with 3TB disk drives. In the BIOS I have tried having EFI set to "enabled" and "auto" with no apparent difference (no success). I have tried partitioning the drive in a few ways to see if that makes a difference, but so far it has not mattered. Typically I manually create partitions something like this:

        8GB swap
        8GB /boot ext4
        3TB / ext4

    But I've also tried the following, just in case it matters:

        100MB boot efi
        8GB swap
        8GB /boot ext4
        3TB / ext4

    Note: In the partition dialog I specify bootup on the same drive I am partitioning and installing ubuntu v12.04 onto. It is a VERY DANGEROUS FACT that the default for this always comes up with the wrong drive (some other drive, generally the external drive). Unless I'm stupid or misunderstanding something, this is very wrong and very dangerous default behavior.

    Note: If I connect the SATA cable to the 1TB drive that has been my ubuntu 64-bit v10.04 system drive for the past 2 years, it boots up and runs fine. I guess there must be a log file somewhere, and maybe it gives some hints as to what the problem is. I should be able to boot off the 1TB drive with the 3TB drive connected as a secondary (non-boot) drive and get the log file, assuming there is one and someone tells me the name (and where to find it if the name is very generic).

    After installation on the 3TB drive completes and the system reboots, the following prints out on a black screen:

        Loading Operating System ...
        Boot from CD/DVD :
        Boot from CD/DVD :
        error: unknown filesystem
        grub rescue

    Note: I have two DVD burners in the system, hence the duplicate line above. The same install and reboot on the 250GB drives generates "grub error 15". Sigh. Any ideas?

    System specs:

        motherboard == gigabyte 990FXA-UD7
        CPU == AMD FX-8150 8-core bulldozer @ 3.6 GHz
        RAM == 8GB of DDR3 in 2 sticks (matched pair)
        HDD == seagate 3TB SATA3 @ 7200 rpm (new install 64-bit v12.04)
        HDD == seagate 1TB SATA3 @ 7200 rpm (current install 64-bit v10.04)
        GPU == nvidia GTX-285 ???
        == no overclocking or other funky business
        USB == external seagate 2TB HDD for making backups
        DVD == one bluray burner (SATA)
        DVD == one DVD burner (SATA)

    The current ubuntu 64-bit v10.04 system boots and runs fine on a seagate 1TB.
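    For context (an aside, not part of the original question): on a disk larger than 2TB the installer has to use a GPT partition table, and a legacy-BIOS boot from a GPT disk additionally needs a tiny bios_grub partition for GRUB's core image (a UEFI boot needs an EFI System Partition instead); the "grub rescue" and "error 15" symptoms described above are consistent with that piece being missing. A rough sketch, assuming the 3TB disk is /dev/sda and the machine boots in legacy BIOS mode:

        sudo parted /dev/sda mklabel gpt                 # new GPT label (erases the disk)
        sudo parted /dev/sda mkpart primary 1MiB 2MiB    # tiny partition, no filesystem needed
        sudo parted /dev/sda set 1 bios_grub on          # reserve it for GRUB's core image
        # then create swap, /boot and / as usual in the installer's manual partitioner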

    Read the article

  • Use Any Folder For Your Ubuntu Desktop (Even a Dropbox Folder)

    - by Trevor Bekolay
    By default, Ubuntu creates a folder called Desktop in your home directory that gets displayed on your desktop. What if you want to use something else, like your Dropbox folder? Here we look at how to use any folder for your desktop. Not only can you change your desktop folder, you can change the location of any other folder Ubuntu creates for you in your home folder, like Documents or Music – and this works in any Linux distribution using the Gnome desktop manager. In this example, we’re going to change desktop to show our Dropbox folder. Open your home folder in a File Browser by clicking on Places > Home Folder. In the Home Folder, open the .config folder. By default, .config is hidden, so you may have to show hidden folders (temporarily) by clicking on View > Show Hidden Files. Then open the .config folder by double-clicking on it. Now open the user-dirs.dirs file… If double-clicking on it does not open it in a text editor, right-click on it and choose Open with Other Application… and find a text editor like Gedit. Change the entry associated with XDG_DESKTOP_DIR to the folder you want to be shown as your desktop. In our case, this is $HOME/Dropbox. Note: The “~” shortcut for the home directory won’t work in this file (use $HOME for that), but an absolute path (i.e. a path starting with “/”) will work. Feel free to change the locations of the other folders as well. Save and close user-dirs.dirs. At this point you can either log off and then log back on to get your desktop back, or open a terminal window (Applications > Accessories > Terminal) and enter: killall nautilus Nautilus (the file manager in Gnome) will restart itself and display your newly chosen folder as the desktop! This is a cool trick to use any folder for your Ubuntu desktop. What did you use as your desktop folder? Let us know in the comments!
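    As a concrete illustration (the exact paths are whatever you choose), the relevant lines of ~/.config/user-dirs.dirs end up looking something like this after the edit:

        XDG_DESKTOP_DIR="$HOME/Dropbox"
        XDG_DOCUMENTS_DIR="$HOME/Documents"
        XDG_MUSIC_DIR="$HOME/Music"

    Then restart Nautilus so the change takes effect:

        killall nautilus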

    Read the article

  • SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently. However, the disk is often upgraded to improve space, speed or throughput. Today we will look at another IO-related wait type.

    From Book On-Line: Occurs when a task is waiting for I/Os to finish.

    ASYNC_IO_COMPLETION Explanation: A task is waiting for I/O to finish. If the application connected to SQL Server is processing data very slowly, this type of wait can occur. Several long-running database operations like BACKUP, CREATE DATABASE, ALTER DATABASE or other operations can also create this wait type.

    Reducing ASYNC_IO_COMPLETION wait: When the issue is related to IO, one should check the following things associated with the IO subsystem:

    - Look at the programming and see if there is any application code which processes the data slowly (like an inefficient loop, etc.). It should be re-written to avoid this wait type.
    - Proper placement of the files is very important. We should check the file system for proper placement of the files – LDF and MDF on separate drives, TempDB on another separate drive, hot spot tables on a separate filegroup (and on a separate disk), etc.
    - Check the File Statistics and see if there is a high IO Read and IO Write Stall (see SQL SERVER – Get File Statistics Using fn_virtualfilestats).
    - Check the event log and error log for any errors or warnings related to IO.
    - If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects, the SAN was performing really badly, but the SAN administrator would not accept that. After some investigation, he agreed to change the HBA Queue Depth on the development (test) environment. As soon as we changed the HBA Queue Depth to a considerably higher value, there was a sudden big improvement in performance.
    - It is quite possible that there are no proper indexes on the system and yet there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce a lot of CPU, Memory and IO (since a covering index usually has fewer columns than the clustered index; it depends upon the situation). You can refer to the following two articles I wrote that talk about how to optimize indexes: Create Missing Indexes, Drop Unused Indexes.

    Checking Memory Related Perfmon Counters
        SQLServer: Memory Manager\Memory Grants Pending (Consistently higher value than 0-2)
        SQLServer: Memory Manager\Memory Grants Outstanding (Consistently higher value, Benchmark)
        SQLServer: Buffer Manager\Buffer Cache Hit Ratio (Higher is better, greater than 90% for a usually smooth running system)
        SQLServer: Buffer Manager\Page Life Expectancy (Consistently lower value than 300 seconds)
        Memory: Available Mbytes (Information only)
        Memory: Page Faults/sec (Benchmark only)
        Memory: Pages/sec (Benchmark only)

    Checking Disk Related Perfmon Counters
        Average Disk sec/Read (Consistently higher value than 4-8 milliseconds is not good)
        Average Disk sec/Write (Consistently higher value than 4-8 milliseconds is not good)
        Average Disk Read/Write Queue Length (Consistently higher value than benchmark is not good)

    Read all the posts in the Wait Types and Queue series.

    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
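    As a rough illustration of the file-stall check mentioned in the list above (shown here with the DMV equivalent of fn_virtualfilestats; adjust the ordering and filtering to your needs):

        SELECT DB_NAME(vfs.database_id) AS database_name,
               vfs.file_id,
               vfs.num_of_reads,
               vfs.io_stall_read_ms,
               vfs.num_of_writes,
               vfs.io_stall_write_ms
        FROM   sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        ORDER  BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;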

    Read the article

  • SQL SERVER – IO_COMPLETION – Wait Type – Day 10 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently. However, the disk is often upgraded to improve space, speed or throughput. Today we will look at an IO-related wait type.

    From Book On-Line: Occurs while waiting for I/O operations to complete. This wait type generally represents non-data page I/Os. Data page I/O completion waits appear as PAGEIOLATCH_* waits.

    IO_COMPLETION Explanation: A task is waiting for I/O to finish. This is a good indication that the IO subsystem needs to be looked at.

    Reducing IO_COMPLETION wait: When the issue concerns IO, one should look at the following things related to the IO subsystem:

    - Proper placement of the files is very important. We should check the file system for proper placement of files – LDF and MDF on separate drives, TempDB on another separate drive, hot spot tables on a separate filegroup (and on a separate disk), etc.
    - Check the File Statistics and see if there is a high IO Read and IO Write Stall (see SQL SERVER – Get File Statistics Using fn_virtualfilestats).
    - Check the event log and error log for any errors or warnings related to IO.
    - If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects, the SAN was performing really badly, but the SAN administrator would not accept that. After some investigation, he agreed to change the HBA Queue Depth on the development (test) environment, and as soon as we changed the HBA Queue Depth to a considerably higher value, there was a sudden big improvement in performance.
    - It is very possible that there are no proper indexes in the system and there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce a lot of CPU, Memory and IO (since a covering index usually has fewer columns than the clustered index; it depends upon the situation). You can refer to the two articles I wrote about how to optimize indexes: Create Missing Indexes, Drop Unused Indexes.

    Checking Memory Related Perfmon Counters
        SQLServer: Memory Manager\Memory Grants Pending (Consistently higher value than 0-2)
        SQLServer: Memory Manager\Memory Grants Outstanding (Consistently higher value, Benchmark)
        SQLServer: Buffer Manager\Buffer Cache Hit Ratio (Higher is better, greater than 90% for a usually smooth running system)
        SQLServer: Buffer Manager\Page Life Expectancy (Consistently lower value than 300 seconds)
        Memory: Available Mbytes (Information only)
        Memory: Page Faults/sec (Benchmark only)
        Memory: Pages/sec (Benchmark only)

    Checking Disk Related Perfmon Counters
        Average Disk sec/Read (Consistently higher value than 4-8 milliseconds is not good)
        Average Disk sec/Write (Consistently higher value than 4-8 milliseconds is not good)
        Average Disk Read/Write Queue Length (Consistently higher value than benchmark is not good)

    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Types, SQL White Papers, T SQL, Technology
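    To complement the counters above, a quick way to see how much of the server's accumulated wait time is IO-related is to query the wait statistics DMV; a minimal sketch (the numbers are cumulative since the last restart, so compare against a baseline):

        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms
        FROM   sys.dm_os_wait_stats
        WHERE  wait_type IN ('IO_COMPLETION', 'ASYNC_IO_COMPLETION',
                             'PAGEIOLATCH_SH', 'PAGEIOLATCH_EX')
        ORDER  BY wait_time_ms DESC;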

    Read the article

  • 12.04 installation started to black screen during boot today

    - by Cedric
    NOTE: Most of this question is now irrelevant. UPDATE 3 summarizes the problem as it stands.

    I've been running 12.04 on my Lenovo laptop for one month now (updated from 11.04), and I have not had any significant problem until today. This morning, when I boot, I pass the Grub screen, then I get to the purple loading screen with dots as usual, then for some reason I got to the terminal login, with no GUI. startx gives me a black screen. Ctrl+F7/F8 didn't help either. It's similar to: After the update today no graphical interface anymore - 12.04. I followed the instructions at the end, to flush the ATI drivers (which I had installed), and fall back to the community drivers. That made me lose the login! Now I just get a black screen after the Ubuntu loading screen. I can still access the console through recovery, and I've gotten into VESA mode once or twice (not reproducible, for some reason). I've tried various permutations of xorg.conf, without success. Xorg -configure fails for now, though I might be able to get it to work. apt-get update/upgrade doesn't improve anything either. However, both Windows and the 12.04 Live CD still work beautifully, and I know that all my data is still there. Is there any way that I could somehow take the configuration from the Live CD and roll with it? I know that I could reinstall, but that sucks, frankly, especially given that there's no straightforward way of keeping the home (which, incidentally, is inaccessible from the Live CD). Thank you.

    Update: it seems that the fglrx drivers are still active, even after I've --purged them. From Xorg.0.log:

        [ 18.235] (WW) fglrx(0): ***********************************************************
        [ 18.235] (WW) fglrx(0): * DRI initialization failed                               *
        [ 18.235] (WW) fglrx(0): * kernel module (fglrx.ko) may be missing or incompatible *
        [ 18.235] (WW) fglrx(0): * 2D and 3D acceleration disabled                         *
        [ 18.235] (WW) fglrx(0): ***********************************************************
        [ 18.235] Fatal server error:
        [ 18.235] AddScreen/ScreenInit failed for driver 0

    There's also a mention of the "fbdev" module. What is it?

    PARTIALLY SOLVED: I've undone the damage from the fglrx purge. I'm still mystified as to why uninstalling the packages didn't kill fglrx entirely, but I've now recovered the prompt. The solution to the DRI initialization error was to add radeon.modeset=0 to the GRUB boot options. So I'm back to being dropped to a prompt without any GUI. startx gives me a bunch of messages, though no obvious errors. I have little reason to suspect the video drivers, as they worked fine before today. There is no apparent error message in any of the log files.

    UPDATE: When I startx, I get an error:

        Plymouth command failed
        mountall: Disconnected from Plymouth

    This is all over the Internet, but I have not found anything that works for me yet.

    UPDATE 3: If I press ESC during boot, the splash screen (Plymouth!) disappears, and I no longer have any error from Plymouth. The last error message is:

        Stopping mount filesystems on boot

    I can then Ctrl+Alt+F1 to get to TTY1, but startx still does not work. Sadly, the Internet knows nothing about this error message, and neither do I. Help!
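    For reference, the GRUB change mentioned in the PARTIALLY SOLVED note is usually made by editing /etc/default/grub and regenerating the configuration (a sketch; the other options already on that line may differ on your system):

        # /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash radeon.modeset=0"

        # rebuild the GRUB configuration afterwards
        sudo update-grub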

    Read the article

  • MEB Support to NetBackup MMS

    - by Hema Sridharan
    In MySQL Enterprise Backup 3.6, new option was introduced to support backup to tapes via SBT interface. SBT stands for System Backup to Tape, an Oracle API that helps to perform backup and restore jobs via media management software such as Oracle's Secure Backup (OSB). There are other storage managers like IBM's Tivoli Storage Manager (TSM) and Symantec's Netbackup (NB) which are also supported by MEB but we don't guarantee that it will function as expected for every release. MEB supports SBT API version 2.0 In this blog, I am primarily going to focus the interface of MEB and Symantec's NB. If we are using tapes for backup, ensure that tape library and tape drives are compatible. Test Setup 1. Install NB 7.5 master and media servers in Linux OS. ( NB 7.1 can also be used but for testing purpose I used NB 7.5)2. Install MEB 3.8 also in Linux OS.3. Install NB admin console in your windows desktop and configure the NB master server from there. Note: Ensure that you have root user permission to install NetBackup. Configuration Steps for MEB and NB Once MEB and NB are installed, Ensure that NB is linked to MEB by specifying the library /usr/openv/netbackup/bin/libobk.so64 in the mysqlbackup command line using --sbt-lib-path. Configure the NB master server from windows console. That is configure the storage units by specifying the Storage unit name, Disk type, Media Server name etc.  Create NetBackup policies that are user selectable. But please make sure that policy type is "Oracle".  Define the clients where MEB will be executed. Some times this will be different host where MEB is run or some times in same Media server where NB and tapes are attached. Now once the installation and configuration steps are performed for MEB and NB, the next part is the actual execution.MEB should be run as single file backup using --backup-image option with prefix sbt:(it is a tag which tells MEB that it should stream the backup image through the SBT interface) which is sent to NB client via SBT interface . The resulting backup image is stored where NB stores the images that it backs up. The following diagram shows how MEB interacts with MMS through SBT interface. Backup The following parameters should also be ready for the execution,    --sbt-lib-path : Path to SBT library specific to NetBackup MMS. SBT lib for NetBackup  is in /usr/openv/netbackup/bin/libobk.so64    --sbt-environment: Environment variables must be defined specific to NetBackup. 
In our example below, we use     NB_ORA_SERV=myserver.com,    NB_ORA_CLIENT=myserver.com,    NB_ORA_POLICY=NBU-MEB    ORACLE_HOME = /export/home2/tmp/hema/mysql-server/ ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ./mysqlbackup --port=13000 --protocol=tcp --user=root --backup-image=sbt:bkpsbtNB --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --sbt-environment="NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/” --backup-dir=/export/home2/tmp/hema/MEB_bkdir/ backup-to-image ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Once backup is completed successfully, this should appear in Activity Monitor in NetBackup Console.For restore,  image contents has to be extracted using image-to-backup-dir command and then apply-log and copy-back steps are applied. ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ./mysqlbackup --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64  --backup-dir=/export/home2/tmp/hema/NBMEB/ --backup-image=sbt:bkpsbtNB image-to-backup-dir-----------------------------------------------------------------------------------------------------------------------------------Now apply logs as usual, shutdown the server and perform restore, restart the server and check the data contents. ./mysqlbackup   ---backup-dir=/export/home2/tmp/hema/NBMEB/  apply-log ./mysqlbackup --datadir=/export/home2/tmp/hema/mysql-server/mysql-5.5-meb-repo/mysql-test/var/mysqld.1/data/  --backup-dir=/export/home2/tmp/hema/MEB_bkpdir/ innodb_log_files_in_group=2 --innodb_log_file_size=5M --user=root --port=13000 --protocol=tcp copy-back The NB console should show 'Restore" job as done. If you don't see that there is something wrong with MEB or NetBackup.You can also refer to more detailed steps of MEB and NB integration in whitepaper here

    Read the article

  • All libGDX input statements are returning TRUE at once

    - by MowDownJoe
    I'm fooling around with Box2D and libGDX and running into a peculiar problem with polling for input. Here's the code for the Screen's render() loop:

        @Override
        public void render(float delta) {
            Gdx.gl20.glClearColor(0, 0, .2f, 1);
            Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT);
            camera.update();
            game.batch.setProjectionMatrix(camera.combined);
            debugRenderer.render(world, camera.combined);
            if(Gdx.input.isButtonPressed(Keys.LEFT)){
                Gdx.app.log("Input", "Left is being pressed.");
                pushyThingyBody.applyForceToCenter(-10f, 0);
            }
            if(Gdx.input.isButtonPressed(Keys.RIGHT)){
                Gdx.app.log("Input", "Right is being pressed.");
                pushyThingyBody.applyForceToCenter(10f, 0);
            }
            world.step((1f/45f), 6, 2);
        }

    And the constructor is largely just setting up the World, Box2DDebugRenderer, and all the Bodies in the world:

        public SandBox(PhysicsSandboxGame game) {
            this.game = game;
            camera = new OrthographicCamera(800, 480);
            camera.setToOrtho(false);
            world = new World(new Vector2(0, -9.8f), true);
            debugRenderer = new Box2DDebugRenderer();

            BodyDef bodyDef = new BodyDef();
            bodyDef.type = BodyType.DynamicBody;
            bodyDef.position.set(100, 300);
            body = world.createBody(bodyDef);
            CircleShape circle = new CircleShape();
            circle.setRadius(6f);
            FixtureDef fixtureDef = new FixtureDef();
            fixtureDef.shape = circle;
            fixtureDef.density = .5f;
            fixtureDef.friction = .4f;
            fixtureDef.restitution = .6f;
            fixture = body.createFixture(fixtureDef);
            circle.dispose();

            BodyDef groundBodyDef = new BodyDef();
            groundBodyDef.position.set(new Vector2(0, 10));
            groundBody = world.createBody(groundBodyDef);
            PolygonShape groundBox = new PolygonShape();
            groundBox.setAsBox(camera.viewportWidth, 10f);
            groundBody.createFixture(groundBox, 0f);
            groundBox.dispose();

            BodyDef pushyThingyBodyDef = new BodyDef();
            pushyThingyBodyDef.type = BodyType.DynamicBody;
            pushyThingyBodyDef.position.set(new Vector2(400, 30));
            pushyThingyBody = world.createBody(pushyThingyBodyDef);
            PolygonShape pushyThingyShape = new PolygonShape();
            pushyThingyShape.setAsBox(40f, 10f);
            FixtureDef pushyThingyFixtureDef = new FixtureDef();
            pushyThingyFixtureDef.shape = pushyThingyShape;
            pushyThingyFixtureDef.density = .4f;
            pushyThingyFixtureDef.friction = .1f;
            pushyThingyFixtureDef.restitution = .5f;
            pushyFixture = pushyThingyBody.createFixture(pushyThingyFixtureDef);
            pushyThingyShape.dispose();
        }

    Testing this on the desktop. Basically, whenever I hit the appropriate keys, neither of the if statements in the loop return true. However, when I click in the window, both statements return true, resulting in a 0 net force on the body. Why is this?
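    A hedged observation on the code above: Gdx.input.isButtonPressed() polls mouse buttons (the Buttons.* constants), while Keys.LEFT and Keys.RIGHT are keyboard key codes, which fits the keys appearing to do nothing while a mouse click trips both branches. A sketch of the keyboard-polling variant, keeping the original two-argument applyForceToCenter call:

        // poll the keyboard, not the mouse buttons
        if (Gdx.input.isKeyPressed(Keys.LEFT)) {
            Gdx.app.log("Input", "Left is being pressed.");
            pushyThingyBody.applyForceToCenter(-10f, 0);
        }
        if (Gdx.input.isKeyPressed(Keys.RIGHT)) {
            Gdx.app.log("Input", "Right is being pressed.");
            pushyThingyBody.applyForceToCenter(10f, 0);
        }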

    Read the article

  • My Take on Hadoop World 2011

    - by Jean-Pierre Dijcks
    I’m sure some of you have read pieces about Hadoop World and I did see some headlines which were somewhat, shall we say, interesting? I thought the keynote by Larry Feinsmith of JP Morgan Chase & Co was one of the highlights of the conference for me. The reason was very simple, he addressed some real use cases outside of internet and ad platforms. The following are my notes, since the keynote was recorded I presume you can go and look at Hadoopworld.com at some point… On the use cases that were mentioned: ETL – how can I do complex data transformation at scale Doing Basel III liquidity analysis Private banking – transaction filtering to feed [relational] data marts Common Data Platform – a place to keep data that is (or will be) valuable some day, to someone, somewhere 360 Degree view of customers – become pro-active and look at events across lines of business. For example make sure the mortgage folks know about direct deposits being stopped into an account and ensure the bank is pro-active to service the customer Treasury and Security – Global Payment Hub [I think this is really consolidation of data to cross reference activity across business and geographies] Data Mining Bypass data engineering [I interpret this as running a lot of a large data set rather than on samples] Fraud prevention – work on event triggers, say a number of failed log-ins to the website. When they occur grab web logs, firewall logs and rules and start to figure out who is trying to log in. Is this me, who forget his password, or is it someone in some other country trying to guess passwords Trade quality analysis – do a batch analysis or all trades done and run them through an analysis or comparison pipeline One of the key requests – if you can say it like that – was for vendors and entrepreneurs to make sure that new tools work with existing tools. JPMC has a large footprint of BI Tools and Big Data reporting and tools should work with those tools, rather than be separate. Security and Entitlement – how to protect data within a large cluster from unwanted snooping was another topic that came up. I thought his Elephant ears graph was interesting (couldn’t actually read the points on it, but the concept certainly made some sense) and it was interesting – when asked to show hands – how the audience did not (!) think that RDBMS and Hadoop technology would overlap completely within a few years. Another interesting session was the session from Disney discussing how Disney is building a DaaS (Data as a Service) platform and how Hadoop processing capabilities are mixed with Database technologies. I thought this one of the best sessions I have seen in a long time. It discussed real use case, where problems existed, how they were solved and how Disney planned some of it. The planning focused on three things/phases: Determine the Strategy – Design a platform and evangelize this within the organization Focus on the people – Hire key people, grow and train the staff (and do not overload what you have with new things on top of their day-to-day job), leverage a partner with experience Work on Execution of the strategy – Implement the platform Hadoop next to the other technologies and work toward the DaaS platform This kind of fitted with some of the Linked-In comments, best summarized in “Think Platform – Think Hadoop”. In other words [my interpretation], step back and engineer a platform (like DaaS in the Disney example), then layer the rest of the solutions on top of this platform. 
One general observation, I got the impression that we have knowledge gaps left and right. On the one hand are people looking for more information and details on the Hadoop tools and languages. On the other I got the impression that the capabilities of today’s relational databases are underestimated. Mostly in terms of data volumes and parallel processing capabilities or things like commodity hardware scale-out models. All in all I liked this conference, it was great to chat with a wide range of people on Oracle big data, on big data, on use cases and all sorts of other stuff. Just hope they get a set of bigger rooms next time… and yes, I hope I’m going to be back next year!

    Read the article

  • Microsoft Ergonomic 7000 keyboard + mouse lag

    - by user115210
    I recently bought a new Microsoft Ergonomic 7000 keyboard. I started to use it with my Ubuntu 12.04 and it lags all the time. I try to be more specific: Even on a minor CPU usage the mouse lags. By minor I mean firefox loading a webpage, or opening an application like conky, gnome-terminal etc. When higher CPU usage occurs the keyboard is lagging too, but by this I mean it misses my hits, so what I type won't appear later. What I tried so far (and did not work)? Disable autosuspend (echo -1 to sys/bus/usb.../autosuspend) and at the same place set level to "on". I have tried several video drivers: Vesa, radeon, newest catalyst (and catalyst beta too) When my keyboard and/or mouse lags I tried an other USB keyboard which works perfectly and the same for the mouse. I tried the keyboard and mouse on a different computer with Linux (Ubuntu, Arch, OpenSuse) too, the same problem appears but not on Windows. I tried to replace the battery sets, and to change channel on the dongle. And also tried to use the dongle from other USB ports. On the same time I am able to use any other wireless mouse. I changed the XkbModel to "microsoft7000" but it did not solve anything. About the hardware: AMD A8 3870K - Radeon HD6550D 8 GB of memory 4 GB of swap (which is almost never used) Here are my PC's details: lsusb: Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 002: ID 045e:071d Microsoft Corp. 
Bus 005 Device 002: ID 0461:4ea7 Primax Electronics, Ltd lspci: 00:00.0 Host bridge: Advanced Micro Devices [AMD] Family 12h Processor Root Complex 00:01.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI BeaverCreek [Radeon HD 6550D] 00:11.0 SATA controller: Advanced Micro Devices [AMD] Hudson SATA Controller [IDE mode] (rev 40) 00:12.0 USB controller: Advanced Micro Devices [AMD] Hudson USB OHCI Controller (rev 11) 00:12.2 USB controller: Advanced Micro Devices [AMD] Hudson USB EHCI Controller (rev 11) 00:13.0 USB controller: Advanced Micro Devices [AMD] Hudson USB OHCI Controller (rev 11) 00:13.2 USB controller: Advanced Micro Devices [AMD] Hudson USB EHCI Controller (rev 11) 00:14.0 SMBus: Advanced Micro Devices [AMD] Hudson SMBus Controller (rev 13) 00:14.1 IDE interface: Advanced Micro Devices [AMD] Hudson IDE Controller 00:14.2 Audio device: Advanced Micro Devices [AMD] Hudson Azalia Controller (rev 01) 00:14.3 ISA bridge: Advanced Micro Devices [AMD] Hudson LPC Bridge (rev 11) 00:14.4 PCI bridge: Advanced Micro Devices [AMD] Hudson PCI Bridge (rev 40) 00:14.5 USB controller: Advanced Micro Devices [AMD] Hudson USB OHCI Controller (rev 11) 00:15.0 PCI bridge: Advanced Micro Devices [AMD] Device 43a0 00:15.1 PCI bridge: Advanced Micro Devices [AMD] Device 43a1 00:16.0 USB controller: Advanced Micro Devices [AMD] Hudson USB OHCI Controller (rev 11) 00:16.2 USB controller: Advanced Micro Devices [AMD] Hudson USB EHCI Controller (rev 11) 00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 0 (rev 43) 00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 1 00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 2 00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 3 00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 4 00:18.5 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 6 00:18.6 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 5 00:18.7 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 7 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06) dmesg | tail -n 150: http://pastebin.com/sGUAAiUe cat /var/log/Xorg.0.log: http://pastebin.com/fny7ZkN4 Note: The Icon7 Twister Evolution is the replacement mouse to use.
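    For reference, the autosuspend tweak the poster mentions is typically done along these lines (the usb7 path is only an example, the receiver's actual device path varies, and newer kernels expose power/control instead of power/level):

        # locate the wireless receiver first (lsusb / dmesg), then:
        echo -1 | sudo tee /sys/bus/usb/devices/usb7/power/autosuspend
        echo on | sudo tee /sys/bus/usb/devices/usb7/power/control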

    Read the article

  • Ubuntu 12.04 // Likewise Open // Unable to ever authenticate AD users

    - by Rob
    So Ubuntu 12.04, Likewise latest from the beyondtrust website. Joins domain fine. Gets proper information from lw-get-status. Can use lw-find-user-by-name to retrieve/locate users. Can use lw-enum-users to get all users. Attempting to login with an AD user via SSH generates the following errors in the auth.log file: Nov 28 19:15:45 hostname sshd[2745]: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory Nov 28 19:15:45 hostname sshd[2745]: PAM adding faulty module: pam_winbind.so Nov 28 19:15:51 hostname sshd[2745]: error: PAM: Authentication service cannot retrieve authentication info for DOMAIN\\user.name from remote.hostname Nov 28 19:16:06 hostname sshd[2745]: Connection closed by 10.1.1.84 [preauth] Attempting to login via the LightDM itself generates similar errors in the auth.log file. Nov 28 19:19:29 hostname lightdm: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory Nov 28 19:19:29 hostname lightdm: PAM adding faulty module: pam_winbind.so Nov 28 19:19:47 hostname lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "DOMAIN\user.name" Nov 28 19:19:52 hostname lightdm: [lsass-pam] [module:pam_lsass]pam_sm_authenticate error [login:DOMAIN\user.name][error code:40022] Nov 28 19:19:54 hostname lightdm: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory Nov 28 19:19:54 hostname lightdm: PAM adding faulty module: pam_winbind.so Attempting to login via a console on the system itself generates slightly different errors: Nov 28 19:31:09 hostname login[997]: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory Nov 28 19:31:09 hostname login[997]: PAM adding faulty module: pam_winbind.so Nov 28 19:31:11 hostname login[997]: [lsass-pam] [module:pam_lsass]pam_sm_authenticate error [login:DOMAIN\user.name][error code:40022] Nov 28 19:31:14 hostname login[997]: FAILED LOGIN (1) on '/dev/tty2' FOR 'DOMAIN\user.name', Authentication service cannot retrieve authentication info Nov 28 19:31:31 hostname login[997]: FAILED LOGIN (2) on '/dev/tty2' FOR 'DOMAIN\user.name', Authentication service cannot retrieve authentication info I am baffled. The errors obviously are correct, the file /lib/security/pam_winbind.so does not exist. If its a dependancy/required, surely it should be part of the package? I've installed/reinstalled, I've used the downloaded package from the beyondtrust website, i've used the repository, nothing seems to work, every method of installing this application generates the same errors for me. UPDATE : Hrmm, I thought likewise didn't use native winbind but its own modules. Installing winbind from apt-get uninstalls pbis-open (likewise) and generates failures when installing if pbis-open is installed first. Uninstalled winbind, reinstalled pbis-open, same issue as above. The file pam_winbind.so does not exist in that location. Setting up pbis-open-legacy (7.0.1.918) ... Installing Packages was successful This computer is joined to DOMAIN.LOCAL New libraries and configurations have been installed for PAM and NSS. Clearly it thinks it has installed it, but it hasn't. It may be a legacy issue with the previous attempt to configure domain integration manually with winbind. 
Does anyone have a working likewise-open installation and does the /etc/nsswitch.conf include references to winbind? Or do the /etc/pam.d/common-account or /etc/pam.d/common-password reference pam_winbind.so? I'm unsure if those entries are just legacy or setup by likewise. UPDATE 2 : Complete reinstall of OS fixed it and it worked seamlessly, like it was meant to and those 2 PAM files did NOT include entries for pam_winbind.so, so that was the underlying problem. Thanks for the assist.
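    For anyone hitting the pre-reinstall state described above, one quick way to check whether nsswitch.conf or the PAM stack still references winbind rather than the Likewise (lsass) modules is to grep for both (purely illustrative):

        grep -n "winbind\|lsass" /etc/nsswitch.conf /etc/pam.d/common-auth \
            /etc/pam.d/common-account /etc/pam.d/common-password /etc/pam.d/common-session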

    Read the article

  • Using GMail's SMTP and IMAP servers in Notification Mailer

    - by Saroja Kandepuneni
    Overview GMail offers free, reliable, popular SMTP and IMAP services, because of which many people are interested to use it. GMail can be used when there are no in-house SMTP/IMAP servers for testing or debugging purposes. This blog explains how to install GMail SSL certificate in Concurrent Tier, testing the connection using a standalone program, running Mailer diagnostics and configuring GMail IMAP and SMTP servers for Workflow Notification Mailer Inbound and Outbound connections. GMail servers configuration SMTP server Host Name  smtp.gmail.com SSL Port  465 TLS/SSL required  Yes User Name  Your full email address (including @gmail.com or @your_domain.com) Password  Your gmail passwor  IMAP server  Host Name imap.gmail.com  SSL Port 993 TLS/SSL Required Yes  User Name  Your full email address (including @gmail.com or @your_domain.com)  Password Your gmail password GMail SSL Certificate Installation The following is the procedure to install the GMail SSL certificate Copy the below GMail SSL certificate to a file eg: gmail.cer -----BEGIN CERTIFICATE-----MIIDWzCCAsSgAwIBAgIKaNPuGwADAAAisjANBgkqhkiG9w0BAQUFADBGMQswCQYDVQQGEwJVUzETMBEGA1UEChMKR29vZ2xlIEluYzEiMCAGA1UEAxMZR29vZ2xlIEludGVybmV0IEF1dGhvcml0eTAeFw0xMTAyMTYwNDQzMDRaFw0xMjAyMTYwNDUzMDRaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKEwpHb29nbGUgSW5jMRcwFQYDVQQDEw5pbWFwLmdtYWlsLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAqfPyPSEHpfzvXx+9zGUxoxcOXFrGKCbZ8bfUd8JonC7rfId32t0gyAoLCgM6eU4lN05VenNZUoChL/nrX+ApdMQv9UFV58aYSBMU/pMmK5GXansbXlpHao09Mc8eur2xV+4cnEtxUvzpco/OaG15HDXcr46c6hN6P4EEFRcb0ccCAwEAAaOCASwwggEoMB0GA1UdDgQWBBQj27IIOfeIMyk1hDRzfALz4WpRtzAfBgNVHSMEGDAWgBS/wDDr9UMRPme6npH7/Gra42sSJDBbBgNVHR8EVDBSMFCgTqBMhkpodHRwOi8vd3d3LmdzdGF0aWMuY29tL0dvb2dsZUludGVybmV0QXV0aG9yaXR5L0dvb2dsZUludGVybmV0QXV0aG9yaXR5LmNybDBmBggrBgEFBQcBAQRaMFgwVgYIKwYBBQUHMAKGSmh0dHA6Ly93d3cuZ3N0YXRpYy5jb20vR29vZ2xlSW50ZXJuZXRBdXRob3JpdHkvR29vZ2xlSW50ZXJuZXRBdXRob3JpdHkuY3J0MCEGCSsGAQQBgjcUAgQUHhIAVwBlAGIAUwBlAHIAdgBlAHIwDQYJKoZIhvcNAQEFBQADgYEAxHVhW4aII3BPrKQGUdhOLMmdUyyr3TVmhJM9tPKhcKQ/IcBYUev6gLsB7FH/n2bIJkkIilwZWIsj9jVJaQyJWP84Hjs3kus4fTpAOHKkLqrbIZDYjwVueLmbOqr1U1bNe4E/LTyEf37+Y5hcveWBQduIZnHn1sDE2gA7LnUxvAU=-----END CERTIFICATE----- Install the SSL certificate into the default JRE location or any other location using below command Installing into a dfeault JRE location in EBS instance         # keytool -import -trustcacerts -keystore $AF_JRE_TOP/lib/security/cacerts  -storepass changeit -alias gmail-lnx_chainnedcert -file gmail.cer Install into a custom location         # keytool -import -trustcacerts -keystore <customLocation>  -storepass changeit -alias gmail-lnx_chainnedcert -file gmail.cer       <customLocation> -- directory in instance where the certificate need to be installed After running the above command you can see the following response         Trust this certificate? 
[no]:  yes        Certificate was added to keystore Running Mailer Command Line Diagnostics Run Mailer command line diagnostics from conccurrent tier where Mailer is running, to check the IMAP connection using the below command $AFJVAPRG -classpath $AF_CLASSPATH -Dprotocol=imap -Ddbcfile=$FND_SECURE/$TWO_TASK.dbc -Dserver=imap.gmail.com -Dport=993 -Dssl=Y -Dtruststore=$AF_JRE_TOP/lib/security/cacerts -Daccount=<gmail username> -Dpassword=<password> -Dconnect_timeout=120 -Ddebug=Y -Dlogfile=GmailImapTest.log -DdebugMailSession=Y oracle.apps.fnd.wf.mailer.Mailer Run Mailer command line diagnostics from concurrent tier where Mailer is running, to check the SMTP connection using the below command   $AFJVAPRG -classpath $AF_CLASSPATH -Dprotocol=smtp -Ddbcfile=$FND_SECURE/$TWO_TASK.dbc -Dserver=smtp.gmail.com -Dport=465 -Dssl=Y -Dtruststore=$AF_JRE_TOP/lib/security/cacerts -Daccount=<gmail username> -Dpassword=<password> -Dconnect_timeout=120 -Ddebug=Y -Dlogfile=GmailSmtpTest.log -DdebugMailSession=Y oracle.apps.fnd.wf.mailer.Mailer Standalone program to verify the IMAP connection Run the below standalone program from the concurrent tier node where Mailer is running to verify the connection with GMail IMAP server. It connects to the Gmail IMAP server with the given GMail user name and password and lists all the folders that exist in that account. If the Gmail IMAP server is not working for the  Mailer check whether the PROCESSED and DISCARD folders exist for the GMail account, if not create manually by logging into GMail account.Sample program to test GMail IMAP connection  The standalone program can be run as below  $java GmailIMAPTest GmailUsername GMailUserPassword            Standalone program to verify the SMTP connection Run the below standalone program from the concurrent tier node where Mailer is running to verify the connection with GMail SMTP server. It connects to the GMail SMTP server by authenticating with the given user name and password  and sends a test email message to the give recipient user email address. Sample program to test GMail SMTP connection The standalone program can be run as below  $java GmailSMTPTest GmailUsername gMailPassword recipientEmailAddress    Warnings As gmail.com is an external domain, the Mailer concurrent tier should allow the connection with GMail server Please keep in mind when using it for corporate facilities, that the e-mail data would be stored outside the corporate network
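    The sample programs linked above are not reproduced in this excerpt; a minimal JavaMail sketch of the same kind of IMAP check (the class name and argument handling here are illustrative, not the actual linked sample) would look roughly like this:

        import java.util.Properties;
        import javax.mail.Folder;
        import javax.mail.Session;
        import javax.mail.Store;

        public class GmailImapCheck {
            public static void main(String[] args) throws Exception {
                String user = args[0];      // full GMail address
                String password = args[1];
                Properties props = new Properties();
                props.setProperty("mail.store.protocol", "imaps");
                Session session = Session.getInstance(props);
                Store store = session.getStore("imaps");
                store.connect("imap.gmail.com", 993, user, password);
                // List all folders; PROCESSED and DISCARD should be present for the Mailer
                for (Folder f : store.getDefaultFolder().list("*")) {
                    System.out.println(f.getFullName());
                }
                store.close();
            }
        }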

    Read the article

  • Required Parameters [SSIS Denali]

    - by jamiet
    SQL Server Integration Services (SSIS) in its 2005 and 2008 incarnations expects you to set property values within your package at runtime using Configurations. SSIS developers tend to have rather a lot of issues with SSIS configurations; in this blog post I am going to highlight one of those problems and how it has been alleviated in SQL Server code-named Denali.

    A configuration is a property path/value pair that exists outside of a package, typically within SQL Server or in a collection of one or more configurations in a file called a .dtsConfig file. Within the package one defines a pointer to a configuration that says to the package “When you execute, go and get a configuration value from this location”, and if all goes well the package will fetch that configuration value as it starts to execute and you will see something like the following in your output log:

        Information: 0x40016041 at Package: The package is attempting to configure from the XML file "C:\Configs\MyConfig.dtsConfig".

    Unfortunately things DON’T always go well; perhaps the .dtsConfig file is unreachable, or the name of the SQL Server holding the configuration value has been defined incorrectly – any one of a number of things can go wrong. In this circumstance you might see something like the following in your log output instead:

        Warning: 0x80012014 at Package: The configuration file "C:\Configs\MyConfig.dtsConfig" cannot be found. Check the directory and file name.

    The problem that I want to draw attention to here though is that your package will ignore the fact that it can’t find the configuration and execute anyway. This is really really bad because the package will not be doing what it is supposed to do and, worse, if you have not isolated your environments you might not even know about it. Can you imagine a package executing for months and all the while inserting data into the wrong server? Sounds ridiculous, but I have absolutely seen this happen and the root cause was that no-one picked up on configuration warnings like the one above.

    Happily, in SSIS code-named Denali this problem has gone away as configurations have been replaced with parameters. Each parameter has a property called ‘Required’: any parameter with Required=True must have a value passed to it when the package executes, and any attempt to execute the package without one will result in an error. We see that error when attempting to execute using the SSMS UI, and similarly when executing using T-SQL. The error is:

        Msg 27184, Level 16, State 1, Procedure prepare_execution, Line 112
        In order to execute this package, you need to specify values for the required parameters.

    As you can see, SSIS code-named Denali has built-in mechanisms to prevent the problem I described at the top of this blog post. Specifying a parameter as Required means that any packages in that project cannot execute until a value for the parameter has been supplied. This is a very good thing. I am loath to make recommendations so early in the development cycle, but right now I’m thinking that all Project Parameters should have Required=True; certainly any that are used to define external locations should be. @Jamiet
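    For completeness, supplying a value for a required parameter when starting a package from the SSISDB catalog looks roughly like the following T-SQL (the folder, project, package and parameter names here are illustrative):

        DECLARE @execution_id BIGINT;

        EXEC SSISDB.catalog.create_execution
             @folder_name  = N'MyFolder',
             @project_name = N'MyProject',
             @package_name = N'Package.dtsx',
             @execution_id = @execution_id OUTPUT;

        EXEC SSISDB.catalog.set_execution_parameter_value
             @execution_id,
             @object_type     = 20,              -- 20 = project parameter, 30 = package parameter
             @parameter_name  = N'ServerName',
             @parameter_value = N'MyDatabaseServer';

        EXEC SSISDB.catalog.start_execution @execution_id;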

    Read the article

  • Ubuntu 12.04 - No sound - HELP!

    - by Bruno Tacca
    I'm panicking... my sound stopped working after I tried to set-up my notebook speakers, plus two headphone jacks... My idea was to multichannel the sound to 3 channels, built-in speakers, and sound-card 2 headphone jacks. After a couple efforts I did it with 2 channels, speakers and 1 headphone jack, but the other wasn't working. After more tries and tries, sound stop working. I just want my sound back... crying like a baby on the floor. And, if possible, but not necessary, a simple guide to active the 3 channels. xD I will post the diagnosis according to https://help.ubuntu.com/community/SoundTroubleshootingProcedure STEP 1 Did it, still no sound. STEP 2 Did it, still no sound. STEP 3 and #STEP 4 (I removed the log cause there is a limit of characters to be posted.) The log can be found here: https://answers.launchpad.net/ubuntu/+source/alsa-driver/+question/238653 STEP 5 Rebooted, still no sound. STEP 6 Did it. In the Output Devices tab, nothing is muted. I play a music with the Rhythmbox Music Player, I don't hear anything but in the pavucontrol I can see in the Built-in Audio Analog Stereo a sound bar shaking... but, no sound. STEP 7 In alsamixer, AlsaMixer v1.0.25 Card: HDA Intel PCH Chip: Creative CA0132 information View: F3:[Playback] F4: Capture F5: All Item: Headphone [dB gain: 25.00, 25.00] Then, I have 5 columns Headphone, Speaker, PCM, S/PDIF, S/PDIF Default PCM A little weird when I try to mute the Headphone and the Speaker, here what happens: Starting both unmutted, mutting headphone cause speaker being mutted automaticaly. Starting both unmutted, mutting speaker cause headphone being mutted automaticaly. Starting both mutted, possible to unmute both separately. STEP 8 I cannot hear sound on both (headphone and/or speaker). STEP 9 Dual boot... Restarted, windows was with sound at max volume. Restarted again, still no sound at ubuntu. I heard something when ubuntu started, a little noise, then silence again. The sound icon always start mutted, after unmutting, I have no sound. STEP 10 I dont have this command in my ubuntu. STEP 11 Tried at STEP 8, no sound. There are no problem with jumpers or hardware, cause I have sound working on windows. STEP 12 No way to open my alienware and loss the warranty x.X" STEP 13 I think it's loaded, judging my the logs STEP 14 Alienware M17xR4, the hardware is listed in the logs above, at STEP 4. There are two headphone hacks, one with just an headphone printed above, and the other with an headset (with mic) printed, there is a mic jack too, and a spdif (optical) too. STEP 15 I dont want to enable S/PDIF STEP 16 I never used the HDMI output, yet... Thanks in advance. I hope I listed all the information you need.
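    For what it's worth, the mixer state described in STEP 7 can also be adjusted from a terminal; a hedged example (card index 0 assumed, control names vary by codec):

        amixer -c 0 set Master 80% unmute
        amixer -c 0 set Speaker unmute
        amixer -c 0 set Headphone unmute
        sudo alsactl store    # persist the settings across reboots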

    Read the article

  • Where's My Windows Azure Subscriptions

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/03/wheres-my-windows-azure-subscriptions.aspxYesterday when I opened Windows Azure manage portal I found some resources were missed. I checked the website for those missed cloud service and they are still live. Then I checked my billing history but didn't found any problem. When I back to the portal I found that all of those resource are under my MSDN subscription. So I remembered that if this is related with the recently Windows Azure platform update.   This feature named "Enterprise Management", which provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution. By default, all existing windows azure account would have a default Windows Azure Active Directory (a.k.a. WAAD) associated. In the address bar I can find the default login WAAD of my account, which is "microsoft.onmicrosoft.com". To change the WAAD we can click "subscriptions" on top of the manage portal, select the active directory from the list of "filter by directory" and select the subscription we want to see, then press "apply". As you can see, the subscription under my MSDN was located in a WAAD named "beijingtelecom.onmicrosoft.com". This is because when Microsoft applied this feature, they will check if you have an existing WAAD in your subscription. If not, it will create a new one, otherwise it will use your WAAD and move your subscription into this directory. Since I created a WAAD for test several months ago, this subscription was moved to this directory.   To change the subscription's directory is simple. First we need to create a new WAAD with the name we preferred. As below I created a new directory named "shaunxu". Then select "settings" from the left navigation bar, select the subscription we wanted to change and click "edit directory". You don't have the permission to edit/change directory unless your Microsoft Account is the service administrator of this subscription. Then in the popup window, select the WAAD you want to change and press "next". All done. You need to log off and log in the portal then your subscription will be in the directory you wanted. And after these steps I can view my resources in this subscription.   Summary In this post I described how to change subscriptions into a new directory. With this new feature we can manage our Windows Azure subscription more flexible. But there are something we need keep in mind. 1. Only the service administrator could be able to move subscription. 2. Currently there's no way for us to see our Windows Azure services in more than one directory at the same time. Like me, I can see my services under "shaunxu.onmicrosoft.com" and I must change the filter directory from the "subscriptions" menu to see other services under "microsoft.onmicrosoft.com". 3. Currently we cannot delete an existing WAAD.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Use Twitter in Windows Media Center with TwitterMCE

    - by DigitalGeekery
    Are you a Media Center user who just can’t get enough Twitter? If so, you may want to check out the TwitterMCE plugin for Windows Media Center. Download and install the TwitterMCE application. (See download link below) When you start Windows Media Center, you’ll find the TwitterMCE icon listed in the Extras. When you open the plug-in you’ll be prompted for a Paypal donation and have to wait out the 15 second timer. Next, you’ll need to log in to your Twitter account. Enter your Twitter account username and password. You can do this with the keyboard or by entering letters and numbers with a Media Center remote. When you are finished, select the Login button. You’ll be prompted to select Standard or Video Mode. Standard displays items in a more vertical fashion. Video displays them horizontally and one at a time, and also allows you to watch Live TV, a movie, or video at the same time.

    Reading Your Tweets: Clicking on Home allows you to read the latest Twitter messages from your friends. You can access the previous 20 tweets. Scroll up and down to see additional messages in Standard mode, or right and left in Video mode. Click on the individual Twitter messages to get more information, such as which friend sent the tweet.

    Create a Tweet: To create a Tweet directly from Media Center, select the Update button. Type out your message using your keyboard or your remote and the on-screen keyboard. When you are finished, select Update to send your Twitter message. A few moments later your new tweet will appear. To send a tweet while you are watching TV or a video, log in to the TwitterMCE app, choose the Video mode, and select Update. Enter your tweet with the remote or keyboard. Select Update to send the tweet. You can also view Mentions, Friends, and Followers by selecting the appropriate button. Scroll through your list of friends to read their latest tweets.

    The TwitterMCE plugin works with Windows Vista Premium, Ultimate, and Windows 7. It might not completely replace your favorite Twitter App, but it will allow you to send all the tweets you want without having to take your eyes off your favorite TV programs. Download TwitterMCE

    Read the article

  • Accessing home directory hangs

    - by Jeff
    Occasionally my laptop will hang when trying to access my home directory. The only fix so far is to reboot and then it goes away for a week. /var/log/kern.log has the following error: Nov 21 13:54:39 Laptop1 kernel: [231480.428107] INFO: task ls:10104 blocked for more than 120 seconds. Nov 21 13:54:39 Laptop1 kernel: [231480.428114] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Nov 21 13:54:39 Laptop1 kernel: [231480.428120] ls D f5dbf6a0 0 10104 9964 0x00000004 Nov 21 13:54:39 Laptop1 kernel: [231480.428130] f3edbd40 00000086 00000001 f5dbf6a0 00000000 00000001 c175dfe0 c1868ec0 Nov 21 13:54:39 Laptop1 kernel: [231480.428145] c1868ec0 054eadaf 0000d250 f5005ec0 ee940cc0 f3e5b300 f3edbcf8 c10e8bfa Nov 21 13:54:39 Laptop1 kernel: [231480.428159] f3edbd10 f3edbd10 f3edbd10 c102b505 fffba7e8 089fa000 f3edbd38 c10fb528 Nov 21 13:54:39 Laptop1 kernel: [231480.428173] Call Trace: Nov 21 13:54:39 Laptop1 kernel: [231480.428189] [<c10e8bfa>] ? lru_cache_add_lru+0x2a/0x50 Nov 21 13:54:39 Laptop1 kernel: [231480.428199] [<c102b505>] ? __kunmap_atomic+0x75/0xa0 Nov 21 13:54:39 Laptop1 kernel: [231480.428207] [<c10fb528>] ? do_anonymous_page+0x1f8/0x280 Nov 21 13:54:39 Laptop1 kernel: [231480.428218] [<c152b656>] __mutex_lock_slowpath+0xc6/0x120 Nov 21 13:54:39 Laptop1 kernel: [231480.428225] [<c152b304>] mutex_lock+0x24/0x40 Nov 21 13:54:39 Laptop1 kernel: [231480.428246] [<f83ab87c>] cifs_reconnect_tcon+0x13c/0x2a0 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428255] [<c152fa00>] ? vmalloc_fault+0xee/0xee Nov 21 13:54:39 Laptop1 kernel: [231480.428262] [<c152fc2f>] ? do_page_fault+0x22f/0x4a0 Nov 21 13:54:39 Laptop1 kernel: [231480.428276] [<f83abe3c>] smb_init+0x2c/0x90 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428285] [<c11aa42e>] ? ext4_htree_store_dirent+0x2e/0x120 Nov 21 13:54:39 Laptop1 kernel: [231480.428301] [<f83b0941>] CIFSSMBQPathInfo+0x41/0x210 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428319] [<f83c39e4>] ? cifs_get_inode_info+0x224/0x390 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428336] [<f83c3a21>] cifs_get_inode_info+0x261/0x390 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428354] [<f83bb35d>] ? build_path_from_dentry+0xcd/0x250 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428362] [<c102b69e>] ? kmap_atomic_prot+0xde/0x100 Nov 21 13:54:39 Laptop1 kernel: [231480.428370] [<c152c4cd>] ? _raw_spin_lock+0xd/0x10 Nov 21 13:54:39 Laptop1 kernel: [231480.428388] [<f83c6378>] ? _GetXid+0x58/0x80 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428405] [<f83c4f81>] cifs_revalidate_dentry_attr+0x111/0x1a0 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428423] [<f83c50e2>] cifs_getattr+0x52/0x120 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428431] [<c112c5b2>] vfs_getattr+0x42/0x70 Nov 21 13:54:39 Laptop1 kernel: [231480.428448] [<f83c5090>] ? cifs_revalidate_dentry+0x40/0x40 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428455] [<c112c647>] vfs_fstatat+0x67/0x80 Nov 21 13:54:39 Laptop1 kernel: [231480.428461] [<c112c680>] vfs_lstat+0x20/0x30 Nov 21 13:54:39 Laptop1 kernel: [231480.428468] [<c112c946>] sys_lstat64+0x16/0x30 Nov 21 13:54:39 Laptop1 kernel: [231480.428475] [<c11341ed>] ? link_path_walk+0x79d/0x8a0 Nov 21 13:54:39 Laptop1 kernel: [231480.428483] [<c152c8e4>] syscall_call+0x7/0xb

    Read the article

  • Looking under the hood of SSRS

    - by Jim Giercyk
    SSRS is a powerful tool, but there is very little available to measure its performance or view the SSRS execution log or catalog in detail.  Here are a few simple queries that will give you insight into the system that you never had before.   ACTIVE REPORTS:  Have you ever seen your SQL Server performance take a nose dive due to a long-running report?  If the SPID is executing under a generic Report ID, or it is a scheduled job, you may have no way to tell which report is killing your server.  Running this query will show you which reports are executing at a given time, and WHO is executing them.   USE ReportServerNative SELECT runningjobs.computername, runningjobs.requestname, runningjobs.startdate, users.username, Datediff(s, runningjobs.startdate, Getdate()) / 60 AS 'Active Minutes' FROM runningjobs INNER JOIN users ON runningjobs.userid = users.userid ORDER BY runningjobs.startdate   SSRS CATALOG:  We have all asked “What was the last thing that changed?”, or better yet, “Who in the world did that!”.  Here is a query that will show all of the reports in your SSRS catalog, when they were created and changed, and by whom.   USE ReportServerNative SELECT DISTINCT catalog.PATH, catalog.name, users.username AS [Created By], catalog.creationdate, users_1.username AS [Modified By], catalog.modifieddate FROM catalog INNER JOIN users ON catalog.createdbyid = users.userid INNER JOIN users AS users_1 ON catalog.modifiedbyid = users_1.userid INNER JOIN executionlogstorage ON catalog.itemid = executionlogstorage.reportid WHERE ( catalog.name <> '' )   SSRS EXECUTION LOG:  Sometimes we need to know what was happening on the SSRS report server at a given time in the past.  This query will help you do just that.  You will need to set the timestart and timeend in the WHERE clause to suit your needs.   USE ReportServerNative SELECT catalog.name AS report, e.username AS [User], e.timestart, e.timeend, Datediff(mi, e.timestart, e.timeend) AS 'Time In Minutes', catalog.modifieddate AS [Report Last Modified], users.username FROM catalog (nolock) INNER JOIN executionlogstorage e (nolock) ON catalog.itemid = e.reportid INNER JOIN users (nolock) ON catalog.modifiedbyid = users.userid WHERE e.timestart >= Dateadd(s, -1, '03/31/2012') AND e.timeend <= Dateadd(DAY, 1, '04/02/2012')   LONG RUNNING REPORTS:  This query will show the longest-running reports over a given time period.  Note that the “> 5” in the WHERE clause sets the report threshold at 5 minutes, so anything that ran in less than 5 minutes will not appear in the result set.  Adjust the threshold and start/end times to your liking.  With this information in hand, you can better optimize your system by tweaking the longest-running reports first.
USE ReportServerNative SELECT e.instancename, catalog.PATH, catalog.name, e.username, e.timestart, e.timeend, Datediff(mi, e.timestart, e.timeend) AS 'Minutes', e.timedataretrieval, e.timeprocessing, e.timerendering, e.[RowCount], users_1.username AS createdby, CONVERT(VARCHAR(10), catalog.creationdate, 101) AS 'Creation Date', users.username AS modifiedby, CONVERT(VARCHAR(10), catalog.modifieddate, 101) AS 'Modified Date' FROM executionlogstorage e INNER JOIN catalog ON e.reportid = catalog.itemid INNER JOIN users ON catalog.modifiedbyid = users.userid INNER JOIN users AS users_1 ON catalog.createdbyid = users_1.userid WHERE ( e.timestart > '03/31/2012' ) AND ( e.timestart <= '04/02/2012' ) AND Datediff(mi, e.timestart, e.timeend) > 5 AND catalog.name <> '' ORDER BY [Minutes] DESC   I have used these queries to build SSRS reports that I can refer to quickly, and export to Excel if I need to report on or quantify my findings.  I encourage you to look at the data in the ReportServerNative database on your report server to understand the queries and create some of your own.  For instance, you may want a query to determine which reports are using which shared data sources; a hedged sketch of one approach follows below.  Work smarter, not harder!
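As one example of the follow-up queries suggested above, here is a hedged sketch (not from the original article) that lists which reports use which shared data sources. It assumes the report server database exposes the standard DataSource table, in which ItemID identifies the consuming report and Link points at the shared data source's own Catalog entry; confirm those table and column names against your ReportServerNative database before relying on the results.

USE ReportServerNative
SELECT rpt.PATH AS ReportPath,
       rpt.name AS ReportName,
       src.PATH AS SharedDataSourcePath,
       src.name AS SharedDataSourceName
FROM   catalog rpt
       INNER JOIN datasource ds ON rpt.itemid = ds.itemid  -- one row per data source reference on the report
       INNER JOIN catalog src   ON ds.link = src.itemid    -- Link is populated only for shared data sources
WHERE  rpt.name <> ''
ORDER  BY rpt.PATH, src.name

Reports that use only embedded data sources have a NULL Link, so they simply drop out of the inner join, which is usually what you want for this kind of inventory.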

    Read the article

  • Laptop monitor stopped working and can't be re-enabled on a Dell Latitude E6410

    - by xektrum
    I'm using Ubuntu 12.04 (upgraded from 11.10); everything seemed to work fine until today, when my laptop monitor suddenly stopped working. Here are the facts: My laptop is a Dell Latitude E6410 with Intel graphics. The external monitor is attached through a docking station. Everything worked fine for about 6-7 months, then I upgraded to 12.04. The issue started today, about a week after the upgrade. I think it started after I ran Counter-Strike 1.6: both monitors blinked, and then only the external monitor connected to the docking station continued to work. At first I thought it was a transient issue, but I've rebooted and removed the battery, and the same thing happens. The laptop monitor and external monitor both work fine up to the login screen, but after I log in the laptop monitor goes black. Whenever I try to re-enable the laptop monitor from Display Manager I get errors: “The selected configuration for displays could not be applied” / “could not set the configuration for CRTC 63”. Not sure what technical details are required, but here are some: $ xrandr Screen 0: minimum 320 x 200, current 3120 x 1050, maximum 8192 x 8192 eDP1 connected (normal left inverted right x axis y axis) 1440x900 60.0 + 40.0 VGA1 disconnected (normal left inverted right x axis y axis) HDMI1 connected 1680x1050+0+0 (normal left inverted right x axis y axis) 474mm x 296mm 1680x1050 60.0*+ 1280x1024 75.0 60.0 1152x864 75.0 1024x768 75.1 60.0 800x600 75.0 60.3 640x480 75.0 60.0 720x400 70.1 DP1 disconnected (normal left inverted right x axis y axis) HDMI2 disconnected (normal left inverted right x axis y axis) DP2 disconnected (normal left inverted right x axis y axis) $ tail /var/log/Xorg.0.log [ 8367.132] (WW) intel(0): flip queue failed: Device or resource busy [ 8367.132] (WW) intel(0): Page flip failed: Device or resource busy [ 8367.174] (WW) intel(0): flip queue failed: Device or resource busy [ 8367.174] (WW) intel(0): Page flip failed: Device or resource busy [ 8367.174] (WW) intel(0): flip queue failed: Device or resource busy [ 8367.174] (WW) intel(0): Page flip failed: Device or resource busy [ 8367.265] (WW) intel(0): flip queue failed: Device or resource busy [ 8367.265] (WW) intel(0): Page flip failed: Device or resource busy [ 8367.265] (WW) intel(0): flip queue failed: Device or resource busy [ 8367.265] (WW) intel(0): Page flip failed: Device or resource busy I'm using gnome-shell, and the only ways I've been able to get both displays working have been: 1) Booting with the laptop disconnected from the docking station and then re-attaching the external monitor with VGA instead of DVI, but that only worked for one session. 2) Removing xserver-xorg-video-intel, but then gnome-shell is gone, as well as DRI. I would appreciate any suggestions. Regards, ============================= WORKAROUND FOUND ============================= So I have tried a few things and here is what worked: I installed a newer version of xserver-xorg-video-intel (2.19 vs 2.17) from ppa:xorg-edgers/ppa. It didn't work at first (it was only showing low-graphics mode), so I also tried a different linux-image, 3.0.0-19-generic-pae instead of 3.2.0-24-generic-pae (which I believe is the 12.04 Precise default), and then everything started to work again. Now I've installed 3.4.0-1-generic-pae from the same PPA and everything runs flawlessly, so I believe the issue is either with linux-image 3.2.0-24-generic-pae or with xserver-xorg-video-intel 2.17. Hope this helps someone in the future. 
PS: Now xrandr shows multiple modes for my laptop monitor $ xrandr Screen 0: minimum 320 x 200, current 3120 x 1050, maximum 8192 x 8192 eDP1 connected 1440x900+1680+0 (normal left inverted right x axis y axis) 303mm x 189mm 1440x900 60.0*+ 59.9 40.0 1360x768 59.8 60.0 1152x864 60.0 1024x768 60.0 800x600 60.3 56.2 640x480 59.9 VGA1 disconnected (normal left inverted right x axis y axis) HDMI1 connected 1680x1050+0+0 (normal left inverted right x axis y axis) 474mm x 296mm 1680x1050 60.0*+ 1280x1024 75.0 60.0 1152x864 75.0 1024x768 75.1 60.0 800x600 75.0 60.3 640x480 75.0 60.0 720x400 70.1 DP1 disconnected (normal left inverted right x axis y axis) HDMI2 disconnected (normal left inverted right x axis y axis) DP2 disconnected (normal left inverted right x axis y axis)

    Read the article

  • SQL Sentry Truth-Telling and Disk Configuration

    - by AjarnMark
    Recently, SQL Sentry told me something about my SQL Server disk configurations that I just didn’t want to believe, but alas, it was true. Several days ago I posted my First Impressions of the SQL Sentry Power Suite.  Today’s post could fall into the category of, “Hey, as long as you have that fancy tool…”  Unfortunately, it also falls into the category of an overloaded worker taking someone else’s word for the truth, not verifying it with independent fact-checking, and then making decisions based on that.  Here’s my story… I’m not exactly an Accidental DBA (or Involuntary DBA as Paul Randal calls it).  I came to this company five years ago as a lead application developer with extensive experience in database design and development.  I worked my way into management, and along the way, took over the DBA responsibilities.  Fortunately, our systems run pretty smoothly most of the time, but I’m always looking for ways to make them better and to fit into my understanding of best practices.  When I took over as DBA, I inherited a SQL 2000 server with about 30 databases on it supporting our main systems, and a SQL 2005 server with multiple instances.  Both of these servers were configured with the Operating System and Application files on the C drive, data files on a different drive letter, and log files on a third drive letter.  Even before I took over as DBA, I verified that this was true with a previous server administrator, and that these represented actual separate disks.  He stated that they did, and I thought that all was well. Then one day, I’m poking around inside the SQL Sentry Performance Advisor, checking out features as I am evaluating whether to purchase the product, and I come across a Disk Configuration section.  The first thing I notice is that the drives do not have the proper partition offset, which was not at all surprising to me given the age of the installation and the relative newness of that topic.  But what threw me for a loop was that the graphic display appeared to be telling me that I did not in fact have three separate drives (or arrays) but rather had two, and that the log files were merely on a separate volume on the same physical array as the OS.  I figured that I must be reading it wrong so I scanned the Help file, but that just seemed to confirm my interpretation.  Then I thought, “there must be something wrong with the demo version of the software!  This can’t be right!”  But just to double-check, I went to our current server admin to talk it over with him, and sure enough, SQL Sentry was telling the truth! I was stunned!  I quickly went through the grieving process…denial…anger…reconciliation.  Here was something that I thought was such a basic truth that was turned upside down.  OK, granted, this wasn’t disastrous.  Our databases didn’t suddenly grind to a halt.  I didn’t get calls late at night inquiring about the sudden downturn in performance.  But it was a bit of a shock to the system, in a good way, to jolt me out of taking what I had believed as the truth for granted, and instead to Trust, but Verify! Yes, before someone else points it out, I know that there are “free” disk management tools built in to Windows that would have told me the same thing if I had only looked at them; I did not have to buy a fancy tool to tell me that, but the fact is, until I was evaluating the tool, I had just gone with what I was told, and never bothered to check what was actually there. So, what things do you believe to be true but have actually never verified?

    Read the article

  • Upgrades in 5 Easy Pieces

    - by Anne R.
    Even though there are a few select tasks that I have to do once or twice a year, I can’t remember how to do them! Or where to find the bits and pieces to complete the task. So I love it when someone consolidates everything under one spot. That’s what the CRM On Demand team has done with the upgrade information. Specifically, they have: Provided a “one-stop” area for managing upgrades at your company. Broken down the upgrade process into 5 (yes, 5) steps. Explained when and how to perform each step with dates specific to your pod. Included details about each step, visible by expanding the step. Translated the steps into 11 languages. Added a list of release-specific resources with links from the page. Now, just head for the Training and Support portal, click the Release Info tab, and walk through the “5 Essential Steps to a Successful Upgrade.” Before you continue, though, select your language from the drop-down list on the Release Info page. CRM On Demand now has the upgrade steps translated into 11 languages. On the Step page, you can expand each section in sequence and follow the more detailed instructions that appear. This will ensure that you’ve covered all your bases for each upgrade. Here’s a shortened version of the information that you’ll find: 1. Verify your Primary Contact Information. Have you checked your primary contact information to make sure you’re being notified of all upgrade information? Or do you want more users to receive upgrade announcements? This section provides you with the navigation path to do that in CRM On Demand. 2. Review your Key Upgrade Dates. If you expand this step, a nice table appears with your critical dates for the various milestones. IMPORTANT: When your CRM On Demand pod has been officially added to the upgrade schedule, closer to the release date itself, this table will display your specific timetable. 3. Migrate your Customizations from the Staging Environment before the Snapshot Date. Oracle refreshes the Staging data with a copy of your Production data made on the Production Snapshot Date. So this section lists considerations relevant to this step. It also reminds you of the 2-week period when you should not be making any changes in your Staging environment.   4. Conduct your Upgrade Validation on the Staging Environment. When the Customer Validation Testing period begins, you need to log in to your Staging Environment to validate that your key business processes and customizations continue to behave as expected. If your company utilizes Web Services, Web Links, Web Applets or Workflow, focus on testing these first. You generally have about two weeks for testing. If you run into problems during this time, follow the instructions shown in this section for logging a service request. It describes exactly how to fill out the fields in the SR for the fastest resolution. 5. Conduct "White Glove" Testing in your Upgraded Production Environment. Before users start using the upgrade, you should access a few tabs and reports. Doing this actually warms up the cache so that frequently used pages and reports will come up at normal speed on Monday morning, when users log in to the upgraded system. Resources listed under this step help you in further preparing for the upgrade. Now there’s also a new Documentation section on the right with links to these release-specific resources.   
Very nice, I commented, when discussing these improvements with the “responsible party.” She confirmed that, yes, they tried to consolidate the upgrade information, translate it for better communication, simplify it into 5 easy pieces, and drive admins responsible for handling upgrades to this one site instead of sending out elaborate emails. Yes, I just love it when someone practically reaches out and holds my hand through a process. Next best thing to a wizard!

    Read the article

  • javascript complex recursion [on hold]

    - by Achilles
    Given below is my data in the data array. What I am doing in the code below is constructing, from that data, JSON in a special format, which I also give below. //code start here var hierarchy={}; hierarchy.name="Hierarchy"; hierarchy.children=[{"name":"","children":[{"name":"","children":[]}]}]; var countryindex; var flagExist=false; var data = [ {country :"America", city:"Kansas", employe:'Jacob'}, {country :"Pakistan", city:"Lahore", employe:'tahir'}, {country :"Pakistan", city:"Islamabad", employe:'fakhar'} , {country :"Pakistan", city:"Lahore", employe:'bilal'}, {country :"India", city:"d", employe:'ali'} , {country :"Pakistan", city:"Karachi", employe:'eden'}, {country :"America", city:"Kansas", employe:'Jeen'} , {country :"India", city:"Banglore", employe:'PP'} , {country :"India", city:"Banglore", employe:'JJ'} , ]; for(var i=0;i<data.length;i++) { for(var j=0;j<hierarchy.children.length;j++) { //for checking country match if(hierarchy.children[j].name==data[i].country) { countryindex=j; flagExist=true; break; } } if(flagExist)//country match now no need to add new country just add city in it { var cityindex; var cityflag=false; //hierarchy.children[countryindex].children.push({"name":data[i].city,"children":[]}) //if(hierarchy.children[index].children!=undefined) for(var k=0;k< hierarchy.children[countryindex].children.length;k++) { //for checking city match if(hierarchy.children[countryindex].children[k].name==data[i].city) { // hierarchy.children[countryindex].children[k].children.push({"name":data[i].employe}) cityflag=true; cityindex=k; break; } } if(cityflag)//city match now add just employe at that city index { hierarchy.children[countryindex].children[cityindex].children.push({"name":data[i].employe}); cityflag=false; } else//no city match so add new with employe also as this is new city so its employe will be 1st { hierarchy.children[countryindex].children.push({"name":data[i].city,children:[{"name":data[i].employe}]}); //same as above //hierarchy.children[countryindex].children[length-1].children.push({"name":data[i].employe}); } flagExist=false; } else{ //no country match adding new country //with city also as this is new city of new country console.log("sparta"); hierarchy.children.push({"name":data[i].country,"children":[{"name":data[i].city,"children":[{"name":data[i].employe}]}]}); // hierarchy.children.children.push({"name":data[i].city,"children":[]}); } //console.log(hierarchy); } hierarchy.children.shift(); var j=JSON.stringify(hierarchy); //code ends here //here is the json which I successfully formed from the code { "name":"Hierarchy", "children":[ { "name":"America", "children":[ { "name":"Kansas", "children":[{"name":"Jacob"},{"name":"Jeen"}]}]}, { "name":"Pakistan", "children":[ { "name":"Lahore", "children": [ {"name":"tahir"},{"name":"bilal"}]}, { "name":"Islamabad", "children":[{"name":"fakhar"}]}, { "name":"Karachi", "children":[{"name":"eden"}]}]}, { "name":"India", "children": [ { "name":"d", "children": [ {"name":"ali"}]}, { "name":"Banglore", "children":[{"name":"PP"},{"name":"JJ"}]}]}]} Now the original problem is that currently I am solving this for a data array whose objects have three keys, so I use 3 nested loops. I want to generalize the solution so that it still works if the data objects have more than 3 keys, say 5, as in {country :"America", state:"NewYork",city:"newYOrk",street:"elm", employe:'Jacob'}, or more. I cannot decide beforehand how many keys will come, so I thought recursion may suit best here. 
But I am horrible at writing recursion and the case is also complex. Can some awesome programmer help me write the recursion or suggest some other solution?

    Read the article

  • How to get bearable 2D and 3D performance on AMD Radeon HD 6950?

    - by l0b0
    I have had an AMD Radeon HD 6950 (i.e., Cayman series) for a couple years now, and I have tried a lot of combinations of drivers and settings with terrible results. I'm completely at a loss as to how to proceed. The open source driver has much better 2D performance, but it offloads all OpenGL rendering to the CPU. What I've tried so far: All the latest stable Ubuntu releases in the period, plus one Linux Mint release. All the latest stable AMD Catalyst Proprietary Display Drivers, and currently 13.1. The unofficial wiki installation instructions for every Ubuntu version and the semi-official Ubuntu instructions. All the tips and tweaks I could find for Minecraft (Optifine, reducing settings to minimum), VLC (postprocessing at minimum, rendering at native video size), Catalyst Control Center (flipped every lever in there) and X11 (some binary toggles I can no longer remember). Results: Typically 13-15 FPS in Minecraft, 30 max (100+ in Windows with the same driver version). Around 10 FPS in Team Fortress 2 using the official Steam client. Choppy video playback, in Flash and with VLC. CPU use goes through the roof when rendering video (150% for 1080p on YouTube in Chromium, 100% for 1080p H264 in VLC). glxgears shows 12.5 FPS when maximized. fgl_glxgears shows 10 FPS when maximized. Hardware details from lshw: Motherboard ASUS P6X58D-E CPU Intel Core i7 CPU 950 @ 3.07GHz (never overclocked; 64 bit) 6 GB RAM Video card product "Cayman PRO [Radeon HD 6950]", vendor "Hynix Semiconductor (Hyundai Electronics)" 2 x 1920x1200 monitors, both connected with HDMI. I feel I must be missing something absolutely fundamental here. Is there no accelerated support for anything on 64-bit architectures? Does a dual monitor completely mess up the driver? $ fglrxinfo display: :0 screen: 0 OpenGL vendor string: Advanced Micro Devices, Inc. OpenGL renderer string: AMD Radeon HD 6900 Series OpenGL version string: 4.2.11995 Compatibility Profile Context $ glxinfo | grep 'direct rendering' direct rendering: Yes I am currently using the open source driver, with the following results: Full frame rate and low CPU load when playing 1080p video. Black screen (but music in the background) in Team Fortress 2. Similar performance in Minecraft as the Catalyst driver. In hindsight obvious, since both end up offloading the rendering to the CPU. My /var/log/Xorg.0.log after upgrading to AMD Catalyst 13.1. Some possibly important lines: (WW) Falling back to old probe method for fglrx (WW) fglrx: No matching Device section for instance (BusID PCI:0@3:0:1) found The generated xorg.conf. The disabled "monitor" 0-DFP9 is actually an A/V receiver, which sometimes confuses the monitor drivers when turned on/off (but not in Windows). All three "monitor" devices are connected with HDMI. Edit: Chris Carter's suggestion to use the xorg-edgers PPA (Catalyst 13.1) resulted in some improvement, but still pretty bad performance overall: Minecraft stabilizes at 13-17 FPS, but at least the CPU load is "only" at 45-60%. Still 150% CPU use for 1080p video rendering on YouTube in Chromium. Massive improvement for 1080p H264 in VLC: 40-50% CPU use and no visible jitter glxgears performance about doubled to 25-30 FPS when maximized. fgl_glxgears still at ~10 FPS when maximized.

    Read the article

  • Why you shouldn't add methods to interfaces in APIs

    - by Simon Cooper
    It is an oft-repeated maxim that you shouldn’t add methods to a publicly-released interface in an API. Recently, I was hit hard when this wasn't followed. As part of the work on ApplicationMetrics, I've been implementing auto-reporting of MVC action methods; whenever an action was called on a controller, ApplicationMetrics would automatically report it without the developer needing to add manual ReportEvent calls. Fortunately, MVC provides an easy hook for when a controller is created, letting me log when it happens - the IControllerFactory interface. Now, the dll we provide to instrument an MVC webapp has to be compiled against .NET 3.5 and MVC 1, as the lowest common denominator. This MVC 1 dll will still work when used in an MVC 2, 3 or 4 webapp because all MVC 2+ webapps have a binding redirect redirecting all references to previous versions of System.Web.Mvc to the correct version, and type forwards taking care of any moved types in the new assemblies. Or at least, it should. IControllerFactory In MVC 1 and 2, IControllerFactory was defined as follows: public interface IControllerFactory { IController CreateController(RequestContext requestContext, string controllerName); void ReleaseController(IController controller); } So, to implement the logging controller factory, we simply wrap the existing controller factory: internal sealed class LoggingControllerFactory : IControllerFactory { private readonly IControllerFactory m_CurrentController; public LoggingControllerFactory(IControllerFactory currentController) { m_CurrentController = currentController; } public IController CreateController( RequestContext requestContext, string controllerName) { // log the controller being used FeatureSessionData.ReportEvent("Controller used:", controllerName); return m_CurrentController.CreateController(requestContext, controllerName); } public void ReleaseController(IController controller) { m_CurrentController.ReleaseController(controller); } } Easy. This works as expected in MVC 1 and 2. However, in MVC 3 this type was throwing a TypeLoadException, saying a method wasn't implemented. It turns out that, in MVC 3, the definition of IControllerFactory was changed to this: public interface IControllerFactory { IController CreateController(RequestContext requestContext, string controllerName); SessionStateBehavior GetControllerSessionBehavior( RequestContext requestContext, string controllerName); void ReleaseController(IController controller); } There's a new method in the interface. So when our MVC 1 dll was redirected to reference System.Web.Mvc v3, LoggingControllerFactory tried to implement version 3 of IControllerFactory, was missing the GetControllerSessionBehavior method, and so couldn't be loaded by the CLR. Implementing the new method Fortunately, there was a workaround. Because interface methods are normally implemented implicitly in the CLR, if we simply declare a virtual method matching the signature of the new method in MVC 3, then it will be ignored in MVC 1 and 2 and implement the extra method in MVC 3: internal sealed class LoggingControllerFactory : IControllerFactory { ... public virtual SessionStateBehavior GetControllerSessionBehavior( RequestContext requestContext, string controllerName) { return SessionStateBehavior.Default; } ... } However, this also has problems - the SessionStateBehavior type only exists in .NET 4, and we're limited to .NET 3.5 by support for MVC 1 and 2. 
This means that the only solutions to support all MVC versions are: (1) construct the LoggingControllerFactory type at runtime using reflection, or (2) produce entirely separate dlls for MVC 1&2 and MVC 3. Ugh. And all because of that blasted extra method! Another solution? Fortunately, in this case, there is a third option - System.Web.Mvc also provides a DefaultControllerFactory type; deriving LoggingControllerFactory from it lets DefaultControllerFactory supply the implementation of GetControllerSessionBehavior for us in MVC 3, while still allowing us to override CreateController and ReleaseController. However, this does mean that LoggingControllerFactory won't be able to wrap any calls to GetControllerSessionBehavior. This is an acceptable bug, given the other options, as very few developers will be overriding GetControllerSessionBehavior in their own custom controller factory. So, if you're providing an interface as part of an API, then please please please don't add methods to it. Especially if you don't provide a 'default' implementing type. Any code compiled against the previous version that can't be updated will have some very tough decisions to make to support both versions.

    Read the article

  • It's like I'm in recovery mode after update, but I'm not

    - by mawburn
    I used the Ubuntu software updater and updated to the most recent packages. After the last update today, it's like I have gone into recovery mode, but I haven't. I am running UbuntuGNOME First, everything looks like this: Switching to dark mode does nothing. Also, default applications do not work. Such as Startup and the default screenshot application. Everything was working fine before the latest software update. System Info Ubuntu 14.04 LTS Gnome-Shell 3.10.4 Kernel 3.13.0-29 I can't figure out how to get an update history, but this is almost a fresh install. It's about a week old install and this is the 3rd time I've used the Ubuntu Software Update. I am running AMD ATI HD6700 with the proprietary Catalyst drivers. I tried to provide all information that I thought would be useful, if you need any more please let me know. Edit - I believe something went wrong within these updates: Update Log: Start-Date: 2014-06-09 19:07:07 Commandline: aptdaemon role='role-commit-packages' sender=':1.68' Install: libgnome-desktop-3-10:amd64 (3.12.0-0~eugenesan~trusty2) Upgrade: gnome-session-common:amd64 (3.9.90-0ubuntu12, 3.12.0-0~eugenesan~trusty10), gnome-session-bin:amd64 (3.9.90-0ubuntu12, 3.12.0-0~eugenesan~trusty10), gir1.2-gnomedesktop-3.0:amd64 (3.8.4-0ubuntu3, 3.12.0-0~eugenesan~trusty2), gnome-session:amd64 (3.9.90-0ubuntu12, 3.12.0-0~eugenesan~trusty10), python-libxml2:amd64 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), libspice-server1:amd64 (0.12.4-0nocelt2, 0.12.4-0nocelt2.02~eugenesan~trusty1), gir1.2-mutter-3.0:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1), xserver-xorg-video-qxl:amd64 (0.1.1-0ubuntu3, 0.1.1-0ubuntu3.01), libxml2:amd64 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), libxml2:i386 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), gnome-desktop3-data:amd64 (3.8.4-0ubuntu3, 3.12.0-0~eugenesan~trusty2), mutter:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1), mutter-common:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1), libxml2-utils:amd64 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), libmutter0c:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1) End-Date: 2014-06-09 19:07:12 I also installed Citrix Receiver today, following the tutorial here: Citrix Receiver 12.1 on Ubuntu 14.04 64-bit Log Start-Date: 2014-06-09 18:59:06 Commandline: apt-get install libmotif4:i386 nspluginwrapper lib32z1 libc6-i386 libxp6:i386 libxpm4:i386 libasound2:i386 Install: libmotif-common:amd64 (2.3.4-5, automatic), libatk1.0-0:i386 (2.10.0-2ubuntu2, automatic), libxft2:i386 (2.3.1-2, automatic), libgraphite2-3:i386 (1.2.4-1ubuntu1, automatic), nspluginviewer:i386 (1.4.4-0ubuntu5, automatic), libpango-1.0-0:i386 (1.36.3-1ubuntu1, automatic), libxcursor1:i386 (1.1.14-1, automatic), libmotif4:i386 (2.3.4-5), libxm4:amd64 (2.3.4-5, automatic), libxm4:i386 (2.3.4-5, automatic), libxp6:i386 (1.0.2-1ubuntu1), libpangocairo-1.0-0:i386 (1.36.3-1ubuntu1, automatic), libxcb-render0:i386 (1.10-2ubuntu1, automatic), libthai0:i386 (0.1.20-3, automatic), libharfbuzz0b:i386 (0.9.27-1, automatic), libpixman-1-0:i386 (0.30.2-2ubuntu1, automatic), libpangoft2-1.0-0:i386 (1.36.3-1ubuntu1, automatic), libcairo2:i386 (1.13.0~20140204-0ubuntu1, automatic), lib32z1:amd64 (1.2.8.dfsg-1ubuntu1), libjasper1:i386 (1.900.1-14ubuntu3, automatic), libgtk2.0-0:i386 (2.24.23-0ubuntu1.1, automatic), nspluginwrapper:amd64 (1.4.4-0ubuntu5), libuil4:amd64 (2.3.4-5, automatic), libuil4:i386 (2.3.4-5, automatic), libxcb-shm0:i386 (1.10-2ubuntu1, automatic), libxmu6:i386 (1.1.1-1, automatic), libc6-i386:amd64 (2.19-0ubuntu6), libxinerama1:i386 
(1.1.3-1, automatic), libgdk-pixbuf2.0-0:i386 (2.30.7-0ubuntu1, automatic), libxcomposite1:i386 (0.4.4-1, automatic), libmrm4:amd64 (2.3.4-5, automatic), libmrm4:i386 (2.3.4-5, automatic), libdatrie1:i386 (0.2.8-1, automatic), libxrandr2:i386 (1.4.2-1, automatic), libxpm4:i386 (3.5.10-1) End-Date: 2014-06-09 18:59:11

    Read the article

  • HPCM 11.1.2.2.x - HPCM Standard Costing Generating >99 Calc Scripts

    - by Jane Story
    HPCM Standard Profitability calculation scripts are named based on a documented naming convention. From 11.1.2.2.x, the script name = a script suffix (1 letter) + POV identifier (3 digits) + Stage Order Number (1 digit) + “_” + index (2 digits); please see the documentation for more information (http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_admin/apes01.html). This naming convention results in the name being 8 characters in length, i.e. the maximum number of characters permitted for calculation script names in non-Unicode Essbase BSO databases. The index in the name indicates the number of scripts per stage. In the vast majority of cases, the number of scripts generated per stage will be significantly less than 100 and, therefore, there will be no issue. However, in some cases, the number of scripts generated can exceed 99. It is unusual for an application to generate more than 99 calculation scripts for one stage. This may indicate that explicit assignments are being used extensively. An assessment should be made of the design to see if assignment rules can be used instead. Assignment rules will reduce the need for so many calculation script lines, which will reduce the requirement for such a large number of calculation scripts. In cases where the number of scripts generated reaches 100, the length of the name of the 100th calculation script differs from the 99th, as the calculation script name grows from 8 characters to 9 characters (e.g. A6811_100 rather than A6811_99). A name of 9 characters is not permitted in non-Unicode applications. It is “too long”. When this occurs, an error will show in the hpcm.log as “Error processing calculation scripts” and “Unexpected error in business logic “. Further down the log, it is possible to see that this is “Caused by: Error copying object “ and “Caused by: com.essbase.api.base.EssException: Cannot put olap file object ... object name_[<calc script name> e.g. A6811_100] too long for non-unicode mode application”. The error file will give the name of the calculation script which is causing the issue. In my example, this is A6811_100, and you can see this is 9 characters in length. It is not possible to increase the number of characters allowed in a calculation script name. However, it is possible to increase the size of each calculation script. The default for an HPCM application, set in the preferences, is 4 MB. If each calculation script is allowed to be larger, the number of scripts generated will be reduced and, therefore, fewer than 100 scripts will be generated, which means that the name of each calculation script will remain 8 characters long. To increase the size of the generated calculation scripts for an application, in the HPM_APPLICATION_PREFERENCE table for the application, find the row where HPM_PREFERENCE_NAME_ID=20. The default value in this row is 4194304 (4 MB). This can be increased; e.g. 7340032 will increase it to 7 MB (a hedged SQL sketch of this change follows below). Please restart the profitability service after making the change.
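To make the preference change concrete, here is a hedged sketch of the update described above, written as SQL. It is not from the original article: the name of the value column (HPM_PREFERENCE_VALUE) is an assumption, as is the possible need for an application identifier filter, so confirm the actual columns of your HPM_APPLICATION_PREFERENCE table (and back it up) before running anything like this.

-- Hedged sketch only: raise the generated calc script size from the 4 MB default to 7 MB.
-- HPM_PREFERENCE_VALUE is an assumed column name; the row key HPM_PREFERENCE_NAME_ID = 20 is the one documented above.
-- If the table holds rows for more than one application, also filter on the application identifier column used in your schema.
UPDATE HPM_APPLICATION_PREFERENCE
SET    HPM_PREFERENCE_VALUE   = '7340032'   -- 7 MB; the default is 4194304 (4 MB)
WHERE  HPM_PREFERENCE_NAME_ID = 20;
COMMIT;

As noted above, the profitability service still needs to be restarted before the larger script size takes effect.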

    Read the article

< Previous Page | 294 295 296 297 298 299 300 301 302 303 304 305  | Next Page >