Search Results

Search found 14923 results on 597 pages for 'settings bundle'.

  • Ubuntu systematically losing wired connection

    - by Lukasz Baczynski
    I've been working on 11.10 for the past few days and everything was perfect until today. I updated Ubuntu (some certificates were updated, as far as I remember) and since then the wired network stops working, randomly and systematically. (All the other PCs/Macs work fine.)

      From 192.168.0.9 icmp_seq=25 Destination Host Unreachable
      From 192.168.0.9 icmp_seq=26 Destination Host Unreachable
      From 192.168.0.9 icmp_seq=27 Destination Host Unreachable
      From 192.168.0.9 icmp_seq=28 Destination Host Unreachable
      From 192.168.0.9 icmp_seq=29 Destination Host Unreachable
      From 192.168.0.9 icmp_seq=30 Destination Host Unreachable
      From 192.168.0.9 icmp_seq=31 Destination Host Unreachable
      64 bytes from 192.168.0.1: icmp_req=32 ttl=64 time=1003 ms
      64 bytes from 192.168.0.1: icmp_req=33 ttl=64 time=0.496 ms
      64 bytes from 192.168.0.1: icmp_req=34 ttl=64 time=0.576 ms
      64 bytes from 192.168.0.1: icmp_req=35 ttl=64 time=0.522 ms
      64 bytes from 192.168.0.1: icmp_req=36 ttl=64 time=0.624 ms
      64 bytes from 192.168.0.1: icmp_req=37 ttl=64 time=0.625 ms
      64 bytes from 192.168.0.1: icmp_req=38 ttl=64 time=0.555 ms

    It will work for 20 seconds, then stop working for 10-30 seconds, and so on. I've tried setting my router to give out static IPs; it doesn't help. NOTHING has been changed since yesterday besides the package update. Here are other settings that may be useful:

      baka@baka-PC:~/Private/projects/wduk$ lspci -nnk | grep -iA2 net
      06:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168] (rev 06)
              Subsystem: ASUSTeK Computer Inc. P8P67 and other motherboards [1043:8432]
              Kernel driver in use: r8169

      baka@baka-PC:~/Private/projects/wduk$ ifconfig -a
      eth0      Link encap:Ethernet  HWaddr **Removed MAC address**
                inet addr:192.168.0.9  Bcast:192.168.0.255  Mask:255.255.255.0
                inet6 addr: fe80::ca60:ff:fe0a:85b2/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:6400 errors:0 dropped:6400 overruns:0 frame:6400
                TX packets:7085 errors:0 dropped:107 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:4191983 (4.1 MB)  TX bytes:886881 (886.8 KB)
                Interrupt:72 Base address:0x2000

      lo        Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:2522 errors:0 dropped:0 overruns:0 frame:0
                TX packets:2522 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:1070130 (1.0 MB)  TX bytes:1070130 (1.0 MB)

      baka@baka-PC:~/Private/projects/wduk$ cat /etc/resolv.conf
      # Generated by NetworkManager
      nameserver 8.8.8.8

    Thanks for help.
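
    A workaround often suggested for RTL8111/8168 chips that misbehave under the in-kernel r8169 driver is to switch to Realtek's own r8168 driver. A minimal sketch, not from the original thread, assuming the r8168-dkms package is available in your enabled repositories:

      # Swap the in-kernel r8169 driver for Realtek's r8168 (assumes r8168-dkms
      # exists in your repositories; otherwise build the vendor driver by hand).
      sudo apt-get install r8168-dkms
      echo "blacklist r8169" | sudo tee /etc/modprobe.d/blacklist-r8169.conf
      sudo update-initramfs -u    # rebuild the initramfs so the blacklist takes effect
      sudo reboot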

  • URL protocol handlers in basic Ubuntu Desktop

    - by Hibou57
    There was a way to register URL protocol handlers with GConf, which is now obsolete, and there seems to be no way to do the same with DConf (or GSettings, its recommended wrapper). How does one properly register a URL protocol handler now that GConf is deprecated?

    Additionally, something looks strange to me (as I don't understand it) on my Ubuntu 12.04. The protocol apt:// should be handled by the apturl command. It is so with my Opera browser, but only because I added this specific association using the browser's configuration facility. Otherwise, in the rest of the environment:

    - Running xdg-open apt://foo.bar opens elinks (my www-browser alternative).
    - Running gnome-open apt://foo.bar opens the Software Center.
    - Opening gconf-editor, I see a key /desktop/gnome/url-handlers/apt whose value is apturl "%s", and it is enabled. This configuration seems to be ignored, which is reasonably expected, as GConf is considered obsolete.
    - Opening dconf-editor, I can't see anything related to URL handlers or protocols in /desktop/gnome.

    It looks a bit messy to my eyes (just teasing with this wording, nothing bad). What's underneath?

    Side note: I'm looking for something which preferably works even when the full desktop environment is not loaded, like when running an i3wm session with only gnome-settings-daemon (and other stuff unrelated to this case) loaded.

    Update: Another way to "register" a protocol handler is with *.desktop files and their MIME type, e.g. MimeType=application/<the-protocol>;. I found a /usr/share/applications/ubuntu-software-center.desktop with this content:

      [Desktop Entry]
      Name=Ubuntu Software Center
      GenericName=Software Center
      Comment=Lets you choose from thousands of applications available for Ubuntu
      Exec=/usr/bin/software-center %u
      Icon=softwarecenter
      Terminal=false
      Type=Application
      Categories=PackageManager;GTK;System;Settings;
      MimeType=application/x-deb;application/x-debian-package;x-scheme-handler/apt;
      StartupNotify=true
      X-Ubuntu-Gettext-Domain=software-center
      Keywords=Sources;PPA;Install;Uninstall;Remove;Purchase;Catalogue;Store;

    This one explains why gnome-open apt://foo.bar opens the Software Center instead of apturl. So I installed this apturl.desktop in ~/.local/share/applications:

      [Desktop Entry]
      Encoding=UTF-8
      Version=1.0
      Type=Application
      Terminal=false
      Exec=/usr/bin/apturl %u
      Name=APT-URL
      Comment=APT-URL handler
      Icon=
      Categories=Application;Network;
      MimeType=x-scheme-handler/apt;

    After update-desktop-database, and even after rebooting, both xdg-open and gnome-open still do the same and ignore this user desktop file, which, as usual, should override the one in /usr/share/applications. Maybe there is something special about desktop files specifying an x-scheme-handler MIME type, and they are not handled the usual way. The desktop-file way does not answer the question.
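
    A possibly missing step, offered as a sketch rather than from the original question: installing the .desktop file is only half of the registration; the default handler for the scheme also has to be recorded, which xdg-mime can do from the command line:

      # Register apturl.desktop as the default handler for apt:// URLs.
      # Assumes apturl.desktop is installed in ~/.local/share/applications.
      xdg-mime default apturl.desktop x-scheme-handler/apt
      # Verify the association, then test it:
      xdg-mime query default x-scheme-handler/apt
      xdg-open apt://foo.bar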

  • Ubuntu Sluggish and Graphics Problem after Nvidia Driver Update

    - by iam
    I just recently started using Ubuntu (12.04) a few weeks ago and noticed that the interface is very slow and sluggish:

    - On Dash, I have to type the entire app name and wait a few seconds before it shows up in the search box, and a bit longer before it displays search results.
    - Opening new files or applications also takes quite long and feels awkward.
    - Dragging icons or moving app windows around is not very responsive either: I have to take extra care in moving the mouse, otherwise Ubuntu does not make the correct movement, or ends up doing something incorrect instead, e.g. opening the window to full screen or moving a file to a different folder, which is frustrating.

    My PC is a few years old (1.7 GB RAM), so this could be a reason too, but when I check System Monitor it's hardly ever consuming much memory. Plus, web surfing in Firefox is actually lightning fast (much more so than in Windows), so I suspect there might be something wrong with the graphics driver (mine is a GeForce 7050). I checked around System Settings and found an option to update the Nvidia driver. So I tried it and restarted, as instructed.

    Now, I got into a big problem upon restart: the login-screen window (where I have to type in the password) would take several attempts to display, and finally did not manage to (it would freeze for several seconds before there was any movement again). The background screen also kept reloading several times, and at some point the screen turned black with pixelated color strips running across the bottom third of the screen; after a long while the background screen would come up again. Eventually I managed to reach the desktop, but the launcher, top menu bar and app window borders would not appear.

    I searched around and found many other people have this same problem after updating the Nvidia driver, and on some threads the suggestion is to run "killall -u $USER" on the command line (the only one among various online suggestions I could try, as at that point I could not access a terminal without the launcher - Ctrl-Alt-T doesn't work for me). So I did that, and by creating a new account I was able to access a desktop correctly again, with launcher and menu. But I would still have the same problem when logging into my original account. So I finally tried upgrading to 12.10, and now I can access my original account with a fully functional desktop - the launcher, menu and window borders are all back.

    However, the sluggishness still remains, and now I'm scared of ever having to update the Nvidia driver again! I wonder if anyone knows why updating the Nvidia driver causes this problem, and whether there is a way I can update it safely in the future? I'm also still not sure how to solve the sluggishness, or where else to look for a solution.
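
    For reference, a hedged recovery sketch (not from the original post) for backing a broken proprietary driver out from a text console, so X falls back to the open-source nouveau driver; the exact package names are assumptions that vary by release:

      # From a TTY (Ctrl+Alt+F1) or a recovery-mode root shell:
      sudo apt-get purge nvidia-current nvidia-settings   # package names assumed
      sudo rm -f /etc/X11/xorg.conf                       # let X autodetect again
      sudo reboot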

  • Correcting color-shifted mirrored i915 driver in 12.04?

    - by Will Martin
    I was called in to fix a friend's malfunctioning HP Pavilion. She's not sure exactly which model, but the sticker on the bottom says "G60". The problem was a failed upgrade to 12.04. I was able to mostly repair it with sudo apt-get -f install, which ran setup and configuration for several hundred packages.

    The biggest problem at the moment is Xorg. The login screen (lightdm) loads normally, but at a reduced resolution (1024x768 instead of 1366x768). But once you log in, the colors of the dock on the left and the bar at the top are normal, while the background is filled with bizarro color-skewed ghost images of the desktop. In all cases, the actual contents of any program you run is a totally illegible mess, except that the bar at the top of any program window looks and acts normally. And the ghost images are interactive! For example, if you click the icon in the top right corner to get the "shut down" menu, the same menu will appear in the ghost images below. Starting a terminal will start a terminal window in both the real desktop and the ghost images, and moving it around updates both the real and ghost desktops.

    I suspect Xorg is using some kind of wrong driver and/or parameter for the graphics hardware. Here is the graphics-relevant portion of the lspci -v output:

      00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 09)
              Subsystem: Hewlett-Packard Company Device 360b
              Flags: bus master, fast devsel, latency 0
              Capabilities: [e0] Vendor Specific Information: Len=0a <?>
              Kernel driver in use: agpgart-intel

      00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 09) (prog-if 00 [VGA controller])
              Subsystem: Hewlett-Packard Company Device 360b
              Flags: bus master, fast devsel, latency 0, IRQ 44
              Memory at d0000000 (64-bit, non-prefetchable) [size=4M]
              Memory at c0000000 (64-bit, prefetchable) [size=256M]
              I/O ports at 5110 [size=8]
              Expansion ROM at <unassigned> [disabled]
              Capabilities: [90] MSI: Enable+ Count=1/1 Maskable- 64bit-
              Capabilities: [d0] Power Management version 3
              Kernel driver in use: i915
              Kernel modules: i915

      00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 09)
              Subsystem: Hewlett-Packard Company Device 360b
              Flags: bus master, fast devsel, latency 0
              Memory at d2500000 (64-bit, non-prefetchable) [size=1M]
              Capabilities: [d0] Power Management version 3

    I'm not sure what to check next. I would ordinarily check xorg.conf to see what it says, but that apparently doesn't exist any more, and my googling has not yielded any useful techniques for getting Xorg to tell me what settings it decided to use. The weird part is that it works fine on the login screen. It's only when you actually log in as a user that the display gets screwed up. Suggestions?
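
    One way (a sketch, not from the original question) to see what settings Xorg decided on when there is no xorg.conf is to read the server log, where (==) marks defaults, (--) marks probed values and (**) marks values from a config file; you can also still generate a skeleton xorg.conf to experiment with:

      # Inspect the choices the running server made:
      grep -E '\(==\)|\(--\)|\(\*\*\)' /var/log/Xorg.0.log | less
      # Generate a starting xorg.conf (stop the display manager first):
      sudo service lightdm stop
      sudo Xorg :1 -configure            # writes /root/xorg.conf.new
      sudo cp /root/xorg.conf.new /etc/X11/xorg.conf
      sudo service lightdm start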

  • SQL SERVER – Backup SQL databases to Box or SkyDrive

    - by Pinal Dave
    To ensure your SQL Server or Azure databases remain safe, you should back up your databases periodically, and it is important to store the backups in a reliable location. Microsoft SkyDrive currently offers 7GB free and Box offers 5GB free - both are reliable, and it is simple to send your backups there. SQLBackupAndFTP, in its latest version 9, added the option to back up to SkyDrive and Box (in addition to local/network folder, NAS drive, FTP, Dropbox, Google Drive and Amazon S3). Just select the databases that you'd like to back up and choose to store the backups in SkyDrive or Box. Below I will show you how to do it in detail.

    Select databases to backup

    First connect to your SQL Server or Azure SQL Database. Then select the databases you'd like to back up.

    Connect to SkyDrive or Box cloud

    If you have a free version of SQLBackupAndFTP, the Box destination is included, but the SkyDrive destination will be disabled, as it is available in the Standard version or above. Click "Try now" to get a 30-day trial of all options.

    On the "SkyDrive Settings" form you'll need to authorize SQLBackupAndFTP to access your SkyDrive. Click "Authorize..." to open the SkyDrive authorization page in your browser, sign in to your SkyDrive account and click "Allow". On the next page you will see a field with an authorization code; copy it to the clipboard. The Box procedure is just the same. After that, return to SQLBackupAndFTP, paste the authorization code and click "OK". Once you are authorized, you can enter the path to a backup folder. SQLBackupAndFTP will create the folder if it does not exist. That's all that has to be done to back up to the SkyDrive or Box cloud. You can now click the "Run Now" button to test this job.

    Conclusion

    Whatever your preference for storing SQL backups, it is easy with SQLBackupAndFTP. Note that at the time of this writing they are running a very rare promotion on volume licenses:

    - 5-9 licenses: 20% off
    - 10-19 licenses: 35% off
    - more than 20 licenses: 50% off

    Please let me know your favorite options for storing backups.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

  • Difference between the terms Material & Effect

    - by codey
    I'm making an effect system right now (I think - it may be a material system... or both!). It follows the common (e.g. COLLADA, DirectX) effect framework abstraction: effects have techniques, techniques have passes, passes have states and shader programs.

    An effect, according to COLLADA, defines the equations necessary for the visual appearance of geometry and screen-space image processing. Keeping with the abstraction, effects contain techniques. Each effect can contain one or many techniques (i.e. ways to generate the effect), each of which describes a different method for rendering that effect. The technique could relate to quality (e.g. high precision, high LOD, etc.) or to the in-game situation (e.g. night/day, power-up mode, etc.). Techniques hold a description of the textures, samplers, shaders, parameters and passes necessary for rendering the effect using one method.

    Some algorithms require several passes to render the effect, so pipeline descriptions are broken into an ordered collection of Pass objects. A pass provides a static declaration of all the render states, shaders and settings for "one rendering pipeline" (i.e. one pass).

    Meshes usually contain a series of materials that define the model. According to the COLLADA spec (again), a material instantiates an effect, fills its parameters with values, and selects a technique. But I see material defined differently in other places, such as just the Lambert, Blinn or Phong "material types/shaded surfaces", or as Metal, Plastic, Wood, etc. In game dev forums, people often talk about implementing a "material/effect system".

    Is the material not just an instance of an effect? Ergo, if I had effect objects stored in a collection, each effect instance with its own parameter settings, then there would be no need for the concept of a material... Or am I interpreting it wrong? Please help by contributing your interpretations, as I want to be clear on the distinction (if any), and don't want to miss out on the concept of a material if it should be implemented to closely follow the abstraction of the DirectX FX framework and the COLLADA definitions.

  • More on PHP and Oracle 11gR2 Improvements to Client Result Caching

    - by christopher.jones
    Oracle 11.2 brought several improvements to Client Result Caching. CRC is a way for the results of queries to be cached in the database client process for reuse. In an Oracle OpenWorld presentation, "Best Practices for Developing Performant Applications", my colleague Luxi Chidambaran had a (non-PHP generated) graph for the Niles benchmark that shows a DB CPU reduction of up to 600% and response times up to 22% faster when using CRC.

    Sometimes CRC is called the "Consistent Client Cache" because Oracle automatically invalidates the cache if table data is changed. This makes it easy to use without needing application logic rewrites. There are a few simple database settings to turn on and tune CRC, so management is also easy.

    PHP OCI8, as a "client" of the database, can use CRC. The cache is per-process, so plan carefully before caching large data sets. Tables that are candidates for caching are look-up tables where the network transfer cost dominates.

    CRC is really easy in 11.2 - I'll get to that in a moment. It was also pretty easy in Oracle 11.1, but it needed some tiny application changes. In PHP it was used like:

      $s = oci_parse($c, "select /*+ result_cache */ * from employees");
      oci_execute($s, OCI_NO_AUTO_COMMIT); // Use OCI_DEFAULT in OCI8 <= 1.3
      oci_fetch_all($s, $res);

    I blogged about this in the past. The query had to include a specific hint that you wanted the results cached, and you needed to turn off auto-committing during execution, either with the OCI_DEFAULT flag or its new, better-named alias OCI_NO_AUTO_COMMIT. The no-commit flag rule didn't seem reasonable to me, because most people wouldn't be specific about the commit state for a query.

    In Oracle 11.2, DBAs can nominate tables for caching, either with CREATE TABLE or ALTER TABLE. That means you don't need the query hint anymore. As well, the no-commit flag requirement has been lifted. Your code can now look like:

      $s = oci_parse($c, "select * from employees");
      oci_execute($s);
      oci_fetch_all($s, $res);

    Since your code probably already looks like this, your DBA can find the top queries in the database and simply tune the system by turning on CRC in the database and issuing an ALTER TABLE statement for candidate tables. Voila.

    Another CRC improvement in Oracle 11.2 is that it works with DRCP connection pooling.

    There is some fine print about what is and isn't cached; check the Oracle manuals for details. If you're using 11.1 or non-DRCP "dedicated servers", then make sure you use oci_pconnect() persistent connections. Also, in PHP don't bind strings in the query, although binding as SQLT_INT is OK.
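
    For reference, a hedged sketch of the DBA-side annotation mentioned above; the RESULT_CACHE clause is the Oracle 11.2 table-annotation syntax, and the hr.employees name is a placeholder:

      $ sqlplus / as sysdba
      SQL> ALTER TABLE hr.employees RESULT_CACHE (MODE FORCE);
      SQL> -- revert with: ALTER TABLE hr.employees RESULT_CACHE (MODE DEFAULT);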

  • RIM's current BB7 developer toolset is a joke

    - by mbrit
    tl;dr - RIM's current developer toolset is not fit for purpose.

    Background to this is that I'm currently working on a PhoneGap/Cordova project for a client that has to run on BlackBerry. The tooling is so ridiculous to use that even though I had a gentle dig at them in a Guardian piece, it's worth having a more full-on attack.

    At the moment, RIM's pitch is that apps are built for the current BBOS7 devices using WebWorks. This is an HTML-based toolset. Essentially a browser is spun up in a native app container and your app is powered by JavaScript. Specific JavaScript libraries exist that thunk down to native capabilities on the device. I happen to use PhoneGap/Cordova in combination with this.

    The tooling is non-existent. I'm using TextMate, Ant and Terminal to develop the app. There's no "console.log" output and no debugging. The only way to instrument the app is to put "alert" calls in your code. Apart from the fact that that's *not* fine in 2012, how about this: every time you deploy a new app to the device, the device has to reboot. This process takes six minutes on a relatively modern BlackBerry device.

    How about this as well - in order to get a file into the package, it has to be signed. My small app over here has 100 different files (75 or so generated). Signing doesn't happen locally; it happens on RIM's servers in Waterloo. Thus whenever you deploy the app, the signing utility has to call RIM's servers 100 times. More to the point, sometimes during the day these servers have "micro-downtime" moments where they're unreachable for five or ten minutes, normally two or three times a day. Oh yes, you'll also get an email sent to you per signing, on success or failure. 100 inbound emails, per deployment.

    (I started this post at the beginning of one of these cycles, by the way. That's how long it takes to build and deploy *once*. By the way, the change I made didn't work.)

    To clarify:
    - Change the script,
    - Build it using Ant,
    - Ant spins up a Java app that talks to RIM's servers to sign it.
    - Receive 100 emails, assuming the server is up.
    - App deployed - takes about 30 seconds.
    - BlackBerry device restarts - takes about six minutes.
    - Find and open the app. Go through security prompts.
    - Test the app, with no "console.log" output and no debugger.

    "Why not use the simulator?" I hear you ask. Well, apart from the fact that the simulator refuses to reach any network service over HTTPS that I happen to own? (Some people suggest changing DNS settings for this known issue.) Admittedly, the simulator does show you console.log, but you still have the "six minute" restart issue on the simulator.

    Developers will understand this problem. Breaking concentration for six-plus minutes every time you want to deploy an app turns developing into a nightmare. Combine that with no worthy debugging tools and the toolset becomes a joke.

  • Recovering an Ubuntu installation - Ubuntu eats itself after 'sudo apt-get install -f'

    - by Tony Martin
    Updater (I assume) put a no-entry-style alert icon on the panel, which informed me that certain package dependencies were not up to snuff. Upgrades were thereafter only partial. The dialogue advised that I run sudo apt-get install -f. I did this, hoping that apt-get would fulfil dependencies and replace corrupted files, and watched it systematically remove every component of Linux - both the stuff I had installed and the core Ubuntu packages. I could only assume at this stage that this was in preparation for a fresh install but, of course, I know better now: if you find yourself with apt-get warning you that you are about to remove several hundred packages and asking you to type an involved confirmation string, seek advice before proceeding.

    I digress. This was a 64-bit install of 12.04. All that is left is GRUB pointing to a couple of Windows recovery partitions on the hard drive. Thankfully the Ext4 partition is reachable from a stick boot.

    EDIT: I've logged onto the machine with a 64-bit stick and can see the file structure left behind by apt-get after {ahem} fixing. My first instinct was to run the installer from the stick, but it seemed to want to do another install rather than a repair. My question then: is there a way to recover the current installation, so that if I reinstall the packages I had, they will pick up the original settings? I'm particularly worried about losing email from Evolution - the rest I could probably lash back together.

    As for the use of PPAs, I'm not sure what you're driving at. I generally use Ubuntu Software Centre to install software, though I have used terminal scripts to add new repositories and software successfully, following guidance on various websites. The most recent change I made was a downgrade of Wine in an attempt to install and run Excel 2007 (a necessity, I think, as I have VBA work to do). The installer had stalled and had to be killed. I wonder if that corrupted whatever database holds a model of the package installation structure.

    I would also be interested to know how this disaster came about. I see people in the know recommending sudo apt-get install -f as a fairly innocuous cure in similar circumstances.

    Thanks for your attention,
    Tony Martin

    p.s. Do please forgive the rant aspects of the original post. It's hard to write rationally with a large hole in the pit of your stomach.
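
    Since the Ext4 partition survived, one recovery route (a sketch, not from the original thread; the device name is an assumption - check with lsblk) is to chroot in from the live stick and reinstall the desktop metapackage; configuration and mail under /home are left alone by the reinstall:

      # From the live-USB session; /dev/sda2 is a placeholder for the Ext4 root partition.
      sudo mount /dev/sda2 /mnt
      for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
      sudo cp /etc/resolv.conf /mnt/etc/resolv.conf   # DNS inside the chroot
      sudo chroot /mnt
      apt-get update
      apt-get install ubuntu-desktop                  # pulls the core desktop stack back in
      exit
      sudo reboot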

  • Need help fixing DPKG errors after update from 12.04 to 12.10

    - by James Wulfe
    So I was doing fine, then I upgraded my system to 12.10, and now I can't get my system to update all of its packages properly, no matter what I do. What is happening here, and how do I fix it? If I had thought 12.10 would be this much of a hassle, I would never have upgraded.

    Here is a sampling of the output from "apt-get -f install". It should also be noted that it is just these 6 packages; no other packages have given me this kind of trouble - well, as of now. It was just 5, but then I got an update for Unity, and now unity-common is added to the troublemakers, which prevents me from upgrading the actual unity package, as this package is a dependency.

      Preparing to replace usb-modeswitch-data 20120120-0ubuntu1 (using .../usb-modeswitch-data_20120815-1_all.deb) ...
      /var/lib/dpkg/info/usb-modeswitch-data.prerm: 4: /var/lib/dpkg/info/usb-modeswitch-data.prerm: dpkg-maintscript-helper: Input/output error
      dpkg: warning: subprocess old pre-removal script returned error exit status 2
      dpkg: trying script from the new package instead ...
      /var/lib/dpkg/tmp.ci/prerm: 4: /var/lib/dpkg/tmp.ci/prerm: dpkg-maintscript-helper: Input/output error
      dpkg: error processing /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb (--unpack):
       subprocess new pre-removal script returned error exit status 2
      /var/lib/dpkg/info/usb-modeswitch-data.postinst: 7: /var/lib/dpkg/info/usb-modeswitch-data.postinst: dpkg-maintscript-helper: Input/output error
      dpkg: error while cleaning up:
       subprocess installed post-installation script returned error exit status 2
      Errors were encountered while processing:
       /var/cache/apt/archives/network-manager_0.9.6.0-0ubuntu7_i386.deb
       /var/cache/apt/archives/pcmciautils_018-8_i386.deb
       /var/cache/apt/archives/unity-common_6.10.0-0ubuntu2_all.deb
       /var/cache/apt/archives/whoopsie_0.2.7_i386.deb
       /var/cache/apt/archives/usb-modeswitch_1.2.3+repack0-1ubuntu3_i386.deb
       /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    I would also like to note that I have cleaned the apt cache, both through the terminal and manually; I have tried installing the packages manually through dpkg, both from /var/cache/apt/archives/ and from my own manually downloaded .deb files; I have tried dpkg-reconfigure; and I have used BleachBit to clean my system. I have also tested both my HDD and memory and found no significant errors that would explain the input/output errors.

    Quite frankly, I am just out of options, and have grown tired of trying to google a solution to this mess, but I still do not wish to resort to backing up settings and reinstalling the system. Any help would be appreciated. I am only interested in answers; please leave your feelings towards grammar, punctuation and bias towards how a "post should look" at the door. If you don't have something to contribute towards solving my problem, you are just contributing to it. Thank you.
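
    A diagnostic sketch (not from the original post): every failure above is the maintainer scripts' call to dpkg-maintscript-helper erroring out, so it may be worth checking that the helper itself is intact and reinstalling dpkg before suspecting the six packages:

      # Is the helper present, executable and readable?
      ls -l /usr/bin/dpkg-maintscript-helper
      sudo apt-get install --reinstall dpkg
      # If reading the file itself throws I/O errors, recheck the filesystem:
      sudo touch /forcefsck && sudo reboot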

  • SharePoint 2010 Hosting :: How to Enable Office Web Apps on SharePoint 2010

    - by mbridge
    Office Web Apps is the online version of Microsoft Office 2010, which is very helpful if you are going to use SharePoint 2010 in your organization, as it allows basic editing of Word documents without installing the Office suite on the client machine.

    Prerequisites:
    - Microsoft Server 2008 R2
    - Microsoft SharePoint Server 2010 or Microsoft SharePoint Foundation 2010
    - Microsoft Office Web Apps

    If you have installed all the above products, just follow these steps:

    1. Go to Central Administration > click on Manage Service Applications.
    2. All the menus are now displayed in the ribbon menu format first introduced in Office 2007. Click on New > Word Viewing Service (you can also choose PowerPoint or Excel; the steps are the same). This will open a popup window.
    3. Give it a proper name, which can include your company or project name.
    4. Under Application Pool select: SharePoint Web Services Default.
    5. Next, keep the checkbox checked which says: "Add this service application's proxy to the farm's default proxy list". Click OK.
    6. This will install all the Office Web Apps services required. You can see the name you gave in the step above.

    How to activate Office Web Apps in a site collection?

    1. Go to the site for which you want to activate this feature.
    2. Click on Site Actions > Site Settings > Site Collection Administration > Site Collection Features.
    3. Activate Office Web Apps.

    How to make sure Office Web Apps is working for your site collection?

    1. Locate any Office document you have and click on the smart menu which appears when you hover your mouse over it. Don't double-click, as this will launch the document in the Office client if it is installed (this behaviour can be changed).
    2. If you see "View in Browser" or "Edit in Browser" as a menu item, your Office Web Apps is configured correctly.

    Other posts related to SharePoint 2010:
    1. How to Configure SharePoint Foundation 2010 for SharePoint Workspace 2010
    2. Integrating SharePoint 2010 and SQL 2008 R2

  • Visual studio Real Dark mode (2010,2012,2013)

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/02/visual-studio-real-dark-mode-201020122013.aspx

    When Visual Studio 2010 was released 3 years ago, I soon showed some people a demo of how great a dark mode for Visual Studio would be. Soon we got theme plugins that let us modify the look of Visual Studio. http://studiostyl.es/ already provides lots of wonderful color schemes for theming. These themes also work in WebMatrix 2: WebMatrix 2 has a themes plugin, made by Yishai Galatzer, that is awesome.

    In Visual Studio 2012 we got a native dark mode. This means we can configure it without any plugin or other requirement. In this post I have a demo to show you how to use the dark mode that is part of Windows 7 (and Windows 8 too).

    A few months ago I described a problem where WebMatrix 2 runs slowly; it runs better in Windows 7's dark mode. "Windows 7 dark mode" simply refers to right-click > Personalize > the High Contrast themes at the bottom of the window. This setting makes things a little bit faster.

    Once you have set this, you will see that Visual Studio no longer reacts well, because its color scheme is now broken. What you need now is to import any theme from http://studiostyl.es/. When you import one, it will look good again.

    The demo look shown here is Visual Studio 2010 Express for Windows Phone 7, but it behaves the same in later versions such as 2012 and 2013. Now your VS looks dark; everything is dark. Firefox and IE will not run fully in dark mode, but you can use Chrome, which is less affected. If you benchmark it now, you will find that everything that used to take a long time to load runs faster.

    Note: this is an experiment. Remember to back up your settings before applying a new theme. All I am doing here is making my VS run faster. If you have any trouble or ideas, please comment.

    Thanks for reading my post.

  • Migrating Core Data to new UIManagedDocument in iOS 5

    - by samerpaul
    I have an app that has been on the store since iOS 3.1, so there is a large install base out there that still uses Core Data loaded up in my AppDelegate. In the most recent set of updates I raised the minimum version to 4.3, but still kept the same way of loading the data. Recently I decided it's time to make the minimum version 5.1 (especially with 6 around the corner), so I wanted to start using the new fancy UIManagedDocument way of using Core Data.

    The issue with this, though, is that the old database file is still sitting in the iOS app, so there is no automatic migration to the new document. You basically have to subclass UIManagedDocument with a new model class and override a couple of methods to do it for you. Here's a tutorial on what I did for my app TimeTag.

    Step One: Add a new class file in Xcode and subclass "UIManagedDocument"

    Go ahead and also add a method to get the managedObjectModel out of this class. It should look like:

      @interface TimeTagModel : UIManagedDocument

      - (NSManagedObjectModel *)managedObjectModel;

      @end

    Step Two: Writing the methods in the implementation file (.m)

    I first added a shortcut method for applicationDocumentsDirectory, which returns the URL of the app directory:

      - (NSURL *)applicationDocumentsDirectory
      {
          return [[[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory
                                                         inDomains:NSUserDomainMask] lastObject];
      }

    Then I pull in the managed object model file itself (the .momd file). In my project it's called "minimalTime":

      - (NSManagedObjectModel *)managedObjectModel
      {
          NSString *path = [[NSBundle mainBundle] pathForResource:@"minimalTime" ofType:@"momd"];
          NSURL *momURL = [NSURL fileURLWithPath:path];
          NSManagedObjectModel *managedObjectModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:momURL];
          return managedObjectModel;
      }

    After that, I need to check for a legacy installation and migrate it to the new UIManagedDocument file instead. This is the overridden method:

      - (BOOL)configurePersistentStoreCoordinatorForURL:(NSURL *)storeURL
                                                  ofType:(NSString *)fileType
                                      modelConfiguration:(NSString *)configuration
                                            storeOptions:(NSDictionary *)storeOptions
                                                   error:(NSError **)error
      {
          // If a legacy store exists, copy it to the new location
          NSURL *legacyPersistentStoreURL = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"minimalTime.sqlite"];

          NSFileManager *fileManager = [NSFileManager defaultManager];
          if ([fileManager fileExistsAtPath:legacyPersistentStoreURL.path])
          {
              NSLog(@"Old db exists");
              NSError *thisError = nil;
              [fileManager replaceItemAtURL:storeURL
                              withItemAtURL:legacyPersistentStoreURL
                             backupItemName:nil
                                    options:NSFileManagerItemReplacementUsingNewMetadataOnly
                           resultingItemURL:nil
                                      error:&thisError];
          }

          return [super configurePersistentStoreCoordinatorForURL:storeURL
                                                           ofType:fileType
                                               modelConfiguration:configuration
                                                     storeOptions:storeOptions
                                                            error:error];
      }

    Basically, what's happening above is that it checks for the minimalTime.sqlite file inside the app's bundle on the iOS device. If the file exists, it tells you so in the console, and then tells the fileManager to replace the storeURL (from the method parameter) with the legacy URL. This gives your app access to all the existing data the user has generated (otherwise they would load into a blank app, which would be disastrous). It returns YES if successful, by calling its [super] method.

    Final step: Actually load this database

    Due to how my app works, I actually have to load the database at launch (instead of shortly after, which would be ideal). I call a method named loadDatabase, which looks like this:

      - (void)loadDatabase
      {
          static dispatch_once_t onceToken;

          // Only do this once!
          dispatch_once(&onceToken, ^{
              // Get the URL (the minimalTimeDB name is just something I call it)
              NSURL *url = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"minimalTimeDB"];
              // Init the TimeTagModel (our custom class we wrote above) with the URL
              self.timeTagDB = [[TimeTagModel alloc] initWithFileURL:url];

              // Set up the undo manager if it's nil
              if (self.timeTagDB.undoManager == nil) {
                  NSUndoManager *undoManager = [[NSUndoManager alloc] init];
                  [self.timeTagDB setUndoManager:undoManager];
              }

              // You have to actually check whether it exists already (for some reason you
              // can't just say "open it, and if it's not there, create it")
              if ([[NSFileManager defaultManager] fileExistsAtPath:[url path]]) {
                  // If it does exist, try to open it, and if it doesn't open,
                  // let the user (or at least you) know!
                  [self.timeTagDB openWithCompletionHandler:^(BOOL success){
                      if (!success) {
                          // Handle the error.
                          NSLog(@"Error opening up the database");
                      }
                      else {
                          NSLog(@"Opened the file--it already existed");
                          [self refreshData];
                      }
                  }];
              }
              else {
                  // If it doesn't exist, you need to attempt to create it
                  [self.timeTagDB saveToURL:url
                           forSaveOperation:UIDocumentSaveForCreating
                          completionHandler:^(BOOL success){
                      if (!success) {
                          // Handle the error.
                          NSLog(@"Error creating the database");
                      }
                      else {
                          NSLog(@"Created the file--it did not exist");
                          [self refreshData];
                      }
                  }];
              }
          });
      }

    If you're curious what refreshData looks like, it sends out an NSNotification that the database has been loaded:

      - (void)refreshData
      {
          NSNotification *refreshNotification =
              [NSNotification notificationWithName:kNotificationCenterRefreshAllDatabaseData
                                            object:self.timeTagDB.managedObjectContext
                                          userInfo:nil];
          [[NSNotificationCenter defaultCenter] postNotification:refreshNotification];
      }

    kNotificationCenterRefreshAllDatabaseData is just a constant I have defined elsewhere to keep track of all the NSNotification names I use. I pass the managedObjectContext of the newly created file so that my view controllers can get access to it and start passing it around to one another. The reason we do this as a notification is that loading runs in the background, so we can't know exactly when it finishes. Make sure you design your app for this! Have some kind of loading indicator, or make sure your user can't attempt to create a record before the database actually exists, because it will crash the app.

  • Why is my system freezing when I switch users

    - by ZeroDivide
    Hello, I've recently upgraded from 13.04 to 13.10 64-bit. I'm running AMD graphics with the proprietary drivers. I have two user accounts: mine (administrator) and my girlfriend's (standard).

    My girlfriend clicks "switch user" from my lock screen and logs in fine. I then try to click "switch user" from her lock screen and everything goes black. Then the monitor blinks on and off with just a single cursor. I have no way to access a terminal; the system is unresponsive and I have to hit the power button. Even Ctrl+Alt+F4 or Ctrl+Alt+T doesn't get me a terminal. When I press the power button, the system does start printing the shutdown sequence on the monitor.

    Here is my .xsession-errors:

      Script for ibus started at run_im.
      Script for auto started at run_im.
      Script for default started at run_im.

    Here is hers:

      init: at-spi2-registryd main process ended, respawning
      init: at-spi2-registryd main process ended, respawning
      init: at-spi2-registryd main process ended, respawning
      init: at-spi2-registryd main process ended, respawning
      init: at-spi2-registryd main process ended, respawning
      init: at-spi2-registryd main process ended, respawning
      init: at-spi2-registryd main process ended, respawning
      init: at-spi2-registryd main process ended, respawning
      init: at-spi2-registryd main process ended, respawning
      init: at-spi2-registryd main process ended, respawning
      init: at-spi2-registryd respawning too fast, stopped
      init: logrotate main process (4726) killed by TERM signal
      init: upstart-dbus-session-bridge main process (4865) terminated with status 1
      init: gnome-settings-daemon main process (4843) terminated with status 1
      init: gnome-session main process (4852) terminated with status 1
      init: unity-panel-service main process (4863) killed by KILL signal

    I found some advice in a forum to look for at-spi2-registryd in my system logs; perhaps it will be useful. Executing this:

      sudo grep -r at-spi2-registryd /var/log/*

    produces this:

      /var/log/lightdm/x-1-greeter.log:** (at-spi2-registryd:4384): WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
      /var/log/lightdm/x-1-greeter.log:** (at-spi2-registryd:4384): WARNING **: Unable to register client with session manager
      /var/log/lightdm/x-2-greeter.log.old:** (at-spi2-registryd:7447): WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
      /var/log/lightdm/x-2-greeter.log.old:** (at-spi2-registryd:7447): WARNING **: Unable to register client with session manager
      /var/log/lightdm/x-0-greeter.log:** (at-spi2-registryd:1378): WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
      /var/log/lightdm/x-0-greeter.log:** (at-spi2-registryd:1378): WARNING **: Unable to register client with session manager
      /var/log/lightdm/x-0-greeter.log.old:** (at-spi2-registryd:1357): WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
      /var/log/lightdm/x-0-greeter.log.old:** (at-spi2-registryd:1357): WARNING **: Unable to register client with session manager

    Any ideas what is going on?
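
    A diagnostic sketch (not from the original post): the same code path can be exercised from a terminal before the screen is locked, which makes the failure reproducible without the lock screen, and a VT restart of lightdm is the usual escape hatch when it wedges:

      dm-tool switch-to-greeter      # lightdm's own "switch user" action
      # If the display wedges, try a VT (Ctrl+Alt+F1) and restart the display manager;
      # note this ends every running X session:
      sudo service lightdm restart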

  • Misaligned Display on Resume

    - by Shaun Killingbeck
    I have an odd issue with my laptop display when resuming from suspend. When I have an additional monitor connected there is no issue. However, without an additional monitor connected, after resuming only the left 10% of the laptop screen (just enough to show the Unity launcher and a bit more) is visibly working - although, strangely, in a screenshot that same 10% shows up on the right-hand side of the image.

    I ran xrandr --verbose before and after resume, and the only difference (using diff) was:

      2c2
      < LVDS connected 1366x768+0+0 (0x98) normal (normal left inverted right x axis y axis) 344mm x 194mm
      ---
      > LVDS connected 1366x768+1280+0 (0x98) normal (normal left inverted right x axis y axis) 344mm x 194mm

    This seems to suggest the screen position has been shifted by 1280 pixels horizontally - the width of the second monitor I use. Indeed, running the command xrandr --output LVDS --pos 0x0 does bring the screen back to normal. However, I don't want to have to run this command every time; I'd prefer to cure the source of the problem rather than just correct the symptoms. Any ideas on how to get Ubuntu to keep the display configuration settings from before suspend when it resumes, or why it changes at all?

    Here are some technical details that might be pertinent:

    - HP Pavilion DV6 laptop
    - Ubuntu 13.04
    - AMD Radeon HD 6400M series
    - AMD Radeon HD 6520G
    - Using the proprietary fglrx-updates driver and amdcccle (Catalyst Control Center); unfortunately the open-source driver makes my laptop run even hotter than it already does, otherwise I'd use that

    The contents of xorg.conf:

      Section "ServerLayout"
          Identifier     "amdcccle Layout"
          Screen      0  "amdcccle-Screen[0]-0" 0 0
      EndSection

      Section "Module"
          Load  "glx"
      EndSection

      Section "Monitor"
          Identifier   "0-LVDS"
          Option       "VendorName" "ATI Proprietary Driver"
          Option       "ModelName" "Generic Autodetecting Monitor"
          Option       "DPMS" "true"
          Option       "PreferredMode" "1280x768"
          Option       "TargetRefresh" "60"
          Option       "Position" "0 0"
          Option       "Rotate" "normal"
          Option       "Disable" "false"
      EndSection

      Section "Monitor"
          Identifier   "0-CRT1"
          Option       "VendorName" "ATI Proprietary Driver"
          Option       "ModelName" "Generic Autodetecting Monitor"
          Option       "DPMS" "true"
          Option       "PreferredMode" "1280x768"
          Option       "TargetRefresh" "60"
          Option       "Position" "0 0"
          Option       "Rotate" "normal"
          Option       "Disable" "false"
      EndSection

      Section "Monitor"
          Identifier   "1-LVDS"
          Option       "VendorName" "ATI Proprietary Driver"
          Option       "ModelName" "Generic Autodetecting Monitor"
          Option       "DPMS" "true"
          Option       "TargetRefresh" "60"
          Option       "Position" "1280 0"
          Option       "Rotate" "normal"
          Option       "Disable" "false"
          Option       "PreferredMode" "1366x768"
      EndSection

      Section "Monitor"
          Identifier   "1-CRT1"
          Option       "VendorName" "ATI Proprietary Driver"
          Option       "ModelName" "Generic Autodetecting Monitor"
          Option       "DPMS" "true"
          Option       "TargetRefresh" "60"
          Option       "Position" "0 0"
          Option       "Rotate" "normal"
          Option       "Disable" "false"
          Option       "PreferredMode" "1280x1024"
      EndSection

      Section "Device"
          Identifier  "amdcccle-Device[0]-0"
          Driver      "fglrx"
          Option      "Monitor-LVDS" "1-LVDS"
          Option      "Monitor-CRT1" "1-CRT1"
          BusID       "PCI:0:1:0"
      EndSection

      Section "Device"
          Identifier  "amdcccle-Device[0]-1"
          Driver      "fglrx"
          Option      "Monitor-LVDS" "1-LVDS"
          BusID       "PCI:0:1:0"
          Screen      1
      EndSection

      Section "Screen"
          Identifier "Default Screen"
          DefaultDepth     24
      EndSection

      Section "Screen"
          Identifier "amdcccle-Screen[0]-0"
          Device     "amdcccle-Device[0]-0"
          DefaultDepth     24
          SubSection "Display"
              Viewport   0 0
              Virtual   2646 2646
              Depth     24
          EndSubSection
      EndSection

      Section "Screen"
          Identifier "amdcccle-Screen[0]-1"
          Device     "amdcccle-Device[0]-1"
          DefaultDepth     24
          SubSection "Display"
              Viewport   0 0
              Depth     24
          EndSubSection
      EndSection
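
    As a workaround (a sketch, not from the original question), 13.04 still runs pm-utils hooks on resume, so the corrective command can be run automatically; the user name and display values are assumptions to adjust:

      #!/bin/sh
      # Sketch: save as /etc/pm/sleep.d/99_fix_lvds and mark executable (chmod +x).
      # Re-pins the panel at 0x0 after resume; DISPLAY/XAUTHORITY are assumptions.
      case "$1" in
          resume|thaw)
              DISPLAY=:0 XAUTHORITY=/home/youruser/.Xauthority \
                  xrandr --output LVDS --pos 0x0
              ;;
      esac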

  • HDMI sound gone, can't figure out how to turn it back on

    - by Oli
    I have had an Acer Revo box as a media centre for a while. I recently installed Ubuntu Server (10.10) on it, polished it up with nodm (one of the simplest ways to launch an X session) and installed Boxee. It's been working fine for over a month. It's just running ALSA - I've had problems with PulseAudio/Boxee/HDMI before, so I wanted to keep it simple. And that worked: it pushed both PCM and digital (AAC and various Dolby codecs) over HDMI perfectly. But I restarted it the other day after mucking around with some NFS configuration, and now there isn't any sound.

    The hardware is an ION chipset: Nvidia 9400M graphics with Nvidia MCP79/7A audio.

    One thing I have noticed is that there doesn't appear to be any sign of an IEC958 device. A traditional fix in the past for fresh installs has been to load alsamixer, find the IEC device and toggle its mute, but I can't - I'm certain this used to represent the HDMI output. It just doesn't seem to exist any more, unless I run sudo alsa-utils restart while Boxee is running, when I see it in an error message:

       * Shutting down ALSA...                                          [ OK ]
       * Setting up ALSA...
       * warning: 'alsactl restore' failed with error message 'alsactl: set_control:1388: Cannot write control '2:0:0:IEC958 Playback Default:0' : Operation not permitted'...
         ...done.

    When nodm (and thus Boxee) isn't running, I don't see this error, but alsamixer still doesn't show the IEC channel. aplay -l gives:

      card 0: NVidia [HDA NVidia], device 0: ALC662 rev1 Analog [ALC662 rev1 Analog]
        Subdevices: 1/1
        Subdevice #0: subdevice #0
      card 0: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
        Subdevices: 0/1
        Subdevice #0: subdevice #0

    Its section in lshw reads:

      *-multimedia
           description: Audio device
           product: MCP79 High Definition Audio
           vendor: nVidia Corporation
           physical id: 8
           bus info: pci@0000:00:08.0
           version: b1
           width: 32 bits
           clock: 66MHz
           capabilities: pm bus_master cap_list
           configuration: driver=HDA Intel latency=0 maxlatency=5 mingnt=2
           resources: irq:22 memory:fae78000-fae7bfff

    I was running on the stock PAE kernel, but now it's running on 2.6.37.1. I upgraded to see if that fixed things; it didn't. I'm considering a reinstall, but I hate doing that because a) there's a bit of custom configuration in getting X and Boxee to start on boot, and b) I don't know what the problem is. If I reinstall this time, I'll end up doing that every time the sound breaks.

    I love Ubuntu, but I don't want to install it once a month. Is there any way to forcibly reset all ALSA settings and restart from scratch (without doing a reinstall)? Any other tips? If you need more information, just ask.
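
    On the "reset ALSA from scratch" question, a sketch of the usual reset (not from the original thread): the saved mixer state lives in /var/lib/alsa/asound.state, and removing it before reloading the modules gives factory-default mixer settings without a reinstall:

      sudo rm /var/lib/alsa/asound.state    # discard the saved (possibly broken) mixer state
      sudo /sbin/alsa force-reload          # unload and reload every ALSA kernel module
      alsamixer                             # unmute and raise channels again, then test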

  • Office 2010 Professional Plus (Top 10 reasons to upgrade)

    - by mbcrump
    Being a huge nerd, I decided that I would go ahead and upgrade to the latest and greatest Office: Office 2010 Professional Plus. The biggest concern that I had was losing all my mail settings from Outlook 2007. Thankfully, it upgraded gracefully and worked like a charm. So let's start this top 10 list.

    1) You can upgrade without fear of losing all your stuff! During setup you can select what you want to do; I selected to remove all previous versions.

    2) Outlook conversations: just like Gmail, you can now group emails by conversation. This is simply awesome and a must-have.

    3) The ability to ignore conversations. If you are on an email thread that has nothing to do with you, simply "ignore" the conversation and all its emails go into the deleted folder.

    4) Quick Steps: do you send email to the same team member or group constantly? With Quick Steps, it's just one click away.

    5) Spell check in the subject line!

    6) Easier screenshots - built in, just click the button. No more Alt+PrintScreen, for those that are not aware of the awesome SnagIt 10 that's out.

    7) Open in Protected View. When you open a document from an email attachment, it lets you know the file may be unsafe, and you can click a button to enable editing. This is great for preventing macros.

    8) Excel has always had a variety of charts and graphs available to visually depict data and trends. With Excel 2010, though, Microsoft has added a new feature called Sparklines, which allows you to place a mini-graph or trend line in a single cell. Sparklines are a cool way to quickly and simply add a visual element without going through the effort of inserting a graph or chart that overwhelms the worksheet.

    9) Contact actions. If you hover over a name in the fields of an email, you get a popup giving you several actions you can perform on that person, such as adding them to your Outlook contacts, scheduling a meeting, viewing their stored contact information if they are already in your contacts, sending an instant message or even starting a telephone call.

    10) Windows 7 taskbar context menu - I love the jump list. I don't know how much I would actually use it, but it just rocks.

  • Base Pages and Interfaces for ASP.NET Pages

    - by geekrutherford
    For quite a while I have been using the concept of base pages when developing pages in ASP.NET applications. It is a wonderful method for exposing common functions to all of your application's pages and also for overriding certain events for various purposes (i.e. dynamic themes).

    Recently I found out a new developer will be joining my team. This prompted me to review the application's code for readability and ease of maintenance. I began adding comments throughout the code-behind for all pages within the application. While doing so, I noted that I had used common method names for such things as loading data, configuring controls, applying filters, etc.

    Bringing a new developer on board, I wanted to make the transition as seamless as possible while also ensuring they follow the existing coding practices we already have in place. While I could have created virtual methods for the common page methods, allowing them to be overridden, what I really needed was a way to ensure the new developer implemented the same methods for each and every page. Thus I created an interface to force the issue. Now, every page not only inherits the base page class but also implements an interface. This provides every page not only common functions and overridden page events, but also imposes rules for implementing certain common methods :-)

    Interface:

      public interface BasePageInterface
      {
          /// Configures page based on the user's security permissions.
          void CheckPermissions();

          /// Configures the Filter Form control for the current page.
          /// Ensure you have set the FilteredGrid and PageAjaxManager properties of the FilterForm control in PageLoad!!!
          void ConfigureFilters();

          /// Sets event handlers and default settings for controls on the current page.
          void ConfigureControls();

          /// Exports data bound to the grid in the selected format.
          void ExportGridData(ExportFormat fmt);

          /// Loads data and binds to the grid.
          /// Columns are turned on/off in the grid depending on the tab selected and the user's permissions.
          void LoadData();
      }

    Page code-behind class definition:

      public partial class MyPage : BasePage, BasePageInterface

    Note, you could not use an abstract class to accomplish this, considering C# does not allow multiple inheritance. Nor could the base page class be abstract, since it needs to inherit from the System.Web.UI.Page class in order to override page events.

  • Partial Submit vs. Auto Submit

    - by Frank Nimphius
    Partial Submit

    ADF Faces adds the concept of partial form submit to JavaServer Faces 1.2 and beyond. A partial submit is actually a form submit that does not require a page refresh and only updates components in the view that are referenced from the command component's PartialTriggers property. Another option for refreshing a component in response to a partial submit is to call AdfFacesContext.getCurrentInstance().addPartialTarget(component_instance_handle_goes_here) in a managed bean. If a form contains required fields that the user left empty when invoking the partial submit, then errors are shown for each of those fields, as the full form gets submitted.

    Autosubmit

    An input component that has its autoSubmit property set to true also performs a partial submit of the form. However, this time it doesn't submit the entire form, but only the component that triggers the submit plus the components that reference it in their PartialTriggers property. For example, consider a form that has three input fields inpA, inpB and inpC, with autoSubmit=true set on inpA and required=true set on inpB and inpC.

    Use case 1: Running the view, entering data into inpA and then tabbing out of the field will submit the content for inpA but not for inpB and inpC. Furthermore, neither of the required-field settings on inpB and inpC causes an error.

    Use case 2: You change the configuration of inpC and set its PartialTriggers property to point to the ID of component inpA. When rerunning the sample, entering a value into inpA and tabbing out of the field will now submit the inpA and inpC fields, and thus show an error for the missing required value on inpC.

    Internally, using autoSubmit=true on an input component sets the event root to just this field, which is good to have in case of dependent field validation or behavior. The event root can be extended to include other components by using the PartialTriggers property on those components to point to the input field that has autoSubmit=true defined.

    PartialSubmit vs. AutoSubmit

    Partial submit set on a command component submits the whole form and leaves it to the developer to decide which UI components are refreshed in response. Client-side required-field validation (as well as the server-side equivalent) is not disabled but executed in this scenario. Setting immediate=true on the command item to skip validation doesn't help, as it would also skip the model update. Autosubmit is a functionality on input components and also performs a partial form submit; however, in addition an event root is defined that narrows the scope of the submitted data and thus the components that are validated on the request.

    To read more about this topic, see: http://docs.oracle.com/cd/E23943_01/web.1111/b31973/af_lifecycle.htm#CIAHCFJF

  • Setting up port forwarding for 7000 appliance VM in VirtualBox

    - by uejio
    I've been using the 7000 appliance VM for a lot of testing lately and relied on others to set up the networking for the VM for me, but finally I decided to take the dive and do it myself. After some experimenting, I came up with a brief set of steps to do this all using the VirtualBox CLI instead of the GUI.

    First download the VM image and unpack it somewhere. I put it in /var/tmp. Then set your VBOX_USER_HOME to some place with lots of disk space and import the VM:

      export VBOX_USER_HOME=/var/tmp/MyVirtualBox
      VBoxManage import /var/tmp/simulator/vbox-2011.1.0.0.1.1.8/Sun\ ZFS\ Storage\ 7000.ovf

    (go get a cup of tea...)

    Then set up port forwarding of the VM appliance BUI and shell. First set up the NIC as NAT:

      VBoxManage modifyvm Sun_ZFS_Storage_7000 --nic1 nat

    Then set up rules for port forwarding (pick some unused port numbers):

      VBoxManage modifyvm Sun_ZFS_Storage_7000 --natpf1 "guestssh,tcp,,4622,,22"
      VBoxManage modifyvm Sun_ZFS_Storage_7000 --natpf1 "guestbui,tcp,,46215,,215"

    Verify the settings using:

      VBoxManage showvminfo Sun_ZFS_Storage_7000 | grep -i nic

    Start the appliance:

      $ VBoxHeadless --startvm Sun_ZFS_Storage_7000 &

    Connect to it using your favorite RDP client. I use a Sun Ray, so I use the Sun Ray Windows Connector client:

      $ /opt/SUNWuttsc/bin/uttsc -g 800x600 -P <portnumber> <your-hostname> &

    The port number is displayed in the output of the --startvm command. (This did not work after I updated to VirtualBox 4.1.12, so maybe at this point you need to use the VirtualBox GUI.)

    It takes a while to bring up the VM the first time, so please be patient. The longest time is spent loading the SMF service descriptions, but fortunately that only needs to be done the first time the VM boots. There is also a delay in just booting the appliance, so give it some time. Be sure to set the NIC rule on only one port and not all ports, otherwise there will be a conflict in ports and it won't work.

    After going through the initial configuration screen, you can connect to it using ssh or your browser:

      ssh -p 45022 root@<your-host-name>
      https://<your-host-name>:45215

    BTW, for the initial configuration I only had to set the hostname and password. The rest of the defaults were set by VirtualBox and seemed to work fine.

    Read the article

  • No suspend on lid closing on a Samsung Series 5 14" NP530U4BI

    - by dmeu
    OK, I realize I am not the only one, but I will try to provide all the info possible to make this as exemplary as possible and narrow down the error sources. I have a fresh install of Ubuntu 12.04, and suspend on lid close worked fine right after installing, but now it does not anymore. The suspend option from the system power button on the top right works fine.

    Things I did that I don't know whether they are related:

    Install and remove again the FGLRX drivers (Radeon graphics card)
    Install Jupiter power management (shutting it down does not change anything)
    Plug in and out an external display

    The configuration I know of is well set:

    In System Settings/Power all is set to suspend when closing the lid
    Double checked with dconf-editor; everything is set to suspend

    So, from here on I don't know how to proceed... what are common problems that cause this error?

    EDIT: My computer model is: Samsung Series 5 14" NP530U4BI

    $ sudo lspci -nn
    00:00.0 Host bridge [0600]: Intel Corporation 2nd Generation Core Processor Family DRAM Controller
    00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port
    00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller
    00:16.0 Communication controller [0780]: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1
    00:1a.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2
    00:1b.0 Audio device [0403]: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller
    00:1c.0 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1
    00:1c.3 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4
    00:1c.4 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5
    00:1d.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1
    00:1f.0 ISA bridge [0601]: Intel Corporation HM65 Express Chipset Family LPC Controller [8086:1c49] (rev 04)
    00:1f.2 SATA controller [0106]: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller
    00:1f.3 SMBus [0c05]: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller [8086:1c22] (rev 04)
    01:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Thames [Radeon 7500M/7600M Series]
    02:00.0 Network controller [0280]: Intel Corporation Centrino Advanced-N 6230
    03:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller
    04:00.0 USB controller [0c03]: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
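
    A few terminal checks can help narrow down where the chain breaks. This is a sketch assuming the GNOME settings-daemon schemas that ship with 12.04 (key names differ on other releases):

        # what the power plugin is configured to do on lid close
        gsettings get org.gnome.settings-daemon.plugins.power lid-close-ac-action
        gsettings get org.gnome.settings-daemon.plugins.power lid-close-battery-action

        # does suspend itself still work, independent of the lid switch?
        sudo pm-suspend

        # does the kernel register the lid state change at all?
        cat /proc/acpi/button/lid/*/state

    If pm-suspend works but the lid state never changes when the lid is closed, the problem is the lid switch event rather than the suspend configuration.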

    Read the article

  • Monitor the Weather from Your Windows 7 Taskbar

    - by Asian Angel
    Keeping up with the weather forecast can be hard when you are extra busy with work. If you need a simple but nice-looking way to integrate weather monitoring into your Taskbar, then join us as we look at WeatherBar.

    Setting Up & Using WeatherBar

    To get started, unzip the following files, place them in an appropriate “Program Files” folder, and create a shortcut. When you start WeatherBar for the first time you will be presented with the following window and a random/default location. To get WeatherBar set up for your location there are only two settings to adjust (using the “Pencil & Gear Buttons”). Clicking on the “Pencil Button” will open up a small window; enter the name of your location and click “OK”. Next click on the “Gear Button”, where you can choose the “Update Interval” and “Measurement Format” that best suit your needs. Click “OK” when finished and WeatherBar will be ready to go. That definitely looks nice.

    When you are finished viewing the window, minimize it to the “Taskbar Icon” instead of clicking on the “Close Button”; otherwise the entire app will close. Left-click on the “Taskbar Icon” to bring the window back up. Hovering the mouse over the “Taskbar Icon” provides a nice thumbnail of the weather forecast. Right-clicking on the “Taskbar Icon” will display a nice mini forecast.

    Conclusion

    While WeatherBar may not be for everyone, it does provide a nice, easy way to monitor the weather from your Taskbar without taking up a lot of room.

    Links: Download WeatherBar

    Read the article

  • EBS – ATG Webcast 9/11 - 9/12

    - by cwarticki
    EBS – ATG Webcast in September 2012
    EBS – Multiple Language Support (MLS)

    Agenda:
    EBS is MLS Ready
    NLS / MLS Basic Architecture
    NLS / MLS Installation
    NLS / MLS Configuration Settings
    Troubleshooting
    Question and Answers

    EMEA Session: September 11, 2012 at 09:00 UK / 10:00 CET / 13:30 India / 17:00 Japan / 18:00 Sydney (Australia)
    Details & Registration: Note 1480084.1
    Direct link to register in WebEx

    US Session: September 12, 2012 at 18:00 UK / 19:00 CET / 10:00 AM Pacific / 11:00 AM Mountain / 01:00 PM Eastern
    Details & Registration: Note 1480085.1
    Direct link to register in WebEx

    Schedules, recordings and the presentations of the Advisor Webcasts delivered under the EBS Applications Technology area can be found in Note 1186338.1.
    Current schedules of Advisor Webcasts for all Oracle products can be found in Note 740966.1.
    Post-presentation recordings of the Advisor Webcasts for all Oracle products can be found in Note 740964.1.

    If you have any question about the schedules, or if you have a suggestion for an Advisor Webcast to be planned in future, please send an e-mail to Ruediger Ziegler.

    Read the article

  • Grub rescue - error: unknown filesystem

    - by user53817
    I have a multiboot system set up. The system has three drives. Multiboot is configured with Windows XP, Windows 7, and Ubuntu - all on the first drive. I had a lot of unpartitioned space left on the drive and was reserving it for adding other OSes and for storing files in the future. One day I went ahead and downloaded Partition Wizard and created a logical NTFS partition from within Windows 7, still with some unpartitioned space left over. Everything worked fine until I rebooted the computer a few days later. Now I'm getting:

    error: unknown filesystem.
    grub rescue>

    First of all I was surprised not to find any kind of help command, by trying: help, ?, man, --help, -h, bash, cmd, etc. Now I'm stuck with a non-bootable system. I have started researching the issue and found that people usually recommend booting to a Live CD and fixing the issue from there. Is there a way to fix this issue from within grub rescue without the need for a Live CD?

    UPDATE
    By following the steps from the commands typed into grub rescue (see below), I was able to boot to the initramfs prompt, but not anywhere further than that. So far, from reading the manual on grub rescue, I was able to see my drives and partitions using the ls command. For the first hard drive I see the following:

    (hd0) (hd0,msdos6) (hd0,msdos5) (hd0,msdos2) (hd0,msdos1)

    I now know that (hd0,msdos6) contains Linux on it, since ls (hd0,msdos6)/ lists directories. Others will give "error: unknown filesystem."

    UPDATE 2
    After the following commands I am now getting to the boot menu and can boot into Windows 7 and Ubuntu, but upon reboot I have to repeat these steps:

    ls
    ls (hd0,msdos6)/
    set root=(hd0,msdos6)
    ls /
    set prefix=(hd0,msdos6)/boot/grub
    insmod /boot/grub/linux.mod
    normal

    UPDATE 3
    Thanks Shashank Singh, with your instructions I have simplified my steps to the following. I have learned from you that I can replace msdos6 with just a 6, and that I can just do insmod normal instead of insmod /boot/grub/linux.mod. Now I just need to figure out how to save these settings from within grub itself, without booting into any OS:

    set root=(hd0,6)
    set prefix=(hd0,6)/boot/grub
    insmod normal
    normal

    UPDATE 4
    Well, it seems like it is a requirement to boot into Linux. After booting into Ubuntu I have performed the following steps described in the manual:

    sudo update-grub
    sudo grub-install /dev/sda

    This did not resolve the issue. I still get the grub rescue prompt. What do I need to do to permanently fix it?

    I have also learned that drive numbers as in hd0 need to be translated to drive letters as in /dev/sda for some commands. hd1 would be sdb, hd2 would be sdc, and so on. Partitions listed in grub as (hd0,msdos6) would be translated to /dev/sda6.
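
    Since this is a three-drive machine, one hedged possibility (the question doesn't show the BIOS boot order): the BIOS may be booting a different physical disk than the one grub-install wrote to, so the stale boot code keeps loading. A sketch of how to check from the booted Ubuntu, where /dev/sdb stands in for whichever drive the BIOS actually boots:

        # list all disks so the drive letters are known
        sudo fdisk -l

        # confirm which disk/partition actually holds /boot/grub
        df /boot

        # reinstall GRUB to the disk the BIOS boots, then regenerate the config
        sudo grub-install --recheck /dev/sdb
        sudo update-grub

    Alternatively, changing the BIOS boot order to point at the drive where GRUB was already installed would have the same effect.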

    Read the article

  • Using Network load balancing to distribute load for SharePoint2010 – Part3 of building my own development SharePoint2010 Farm

    - by ybbest
    Part1 of building my own development SharePoint2010 Farm
    Part2 of building my own development SharePoint2010 Farm
    Part3 of building my own development SharePoint2010 Farm

    In my last post, I installed SharePoint2010 on one of the servers (WFE One) and configured it using the OOB SharePoint configuration wizard. In this post I will show you how to use OOB Windows network load balancing to distribute load for a SharePoint2010 site.

    1. Install SharePoint on another server, WFE Two (you can follow the steps in my last post), but instead of choosing to create a new farm, you need to select "connect to existing farm" this time.
    2. Click next, then click the retrieve database names button and select the farm configuration database.
    3. Click next and enter the passphrase you specified when you first installed the SharePoint farm.
    4. Click the advanced settings and select "Use this machine to host the web site".
    5. Click OK to finish the configuration.
    6. Next, install NLB on the two WFE (web front end) SharePoint servers.
    7. Configure NLB to create the cluster. Go to Start > Administrative Tools > Network Load Balancing Manager.
    8. Right-click the Network Load Balancing Clusters node and select New Cluster.
    9. Type in the host name that is to be part of the new cluster.
    10. Type in the IP address for the cluster.
    11. Select Multicast for this cluster. (The default is Unicast.)
    12. You can configure the port rules for the clustering, but I will leave the defaults here.
    13. Add another WFE to the cluster.
    14. Type in the host name that is to be part of the new cluster.
    15. Set the Priority to 2.
    16. Click Next to complete the cluster setup.
    17. Create an entry in the DNS for the new cluster.
    18. Add the binding to the IIS site in the IIS Manager.
    19. Change the alternate access mapping for your default site collection from http://sp2010wefone to http://team.
    20. Browse to http://Team; you will be redirected to the SharePoint site.
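
    As an alternative to clicking through the NLB Manager (steps 7-16), Windows Server 2008 R2 ships PowerShell cmdlets for NLB. The sketch below uses placeholder names for the interface, cluster IP, and second node - they are illustrations, not values from this farm:

        # on WFE One: create the cluster in multicast mode
        Import-Module NetworkLoadBalancingClusters
        New-NlbCluster -InterfaceName "Local Area Connection" -ClusterName "spcluster" `
            -ClusterPrimaryIP 192.168.1.200 -OperationMode Multicast

        # join WFE Two as the second node (the next free priority is assigned automatically)
        Get-NlbCluster | Add-NlbClusterNode -NewNodeName "WFETwo" -NewNodeInterface "Local Area Connection"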

    Read the article

< Previous Page | 458 459 460 461 462 463 464 465 466 467 468 469  | Next Page >