Search Results

Search found 7799 results on 312 pages for 'changing'.


  • Using 2D sprites and 3D models together

    - by Sweta Dwivedi
    I have gone through a few posts that talk about changing GraphicsDevice.BlendState and GraphicsDevice.DepthStencilState (SpriteBatch & render states). However, even after changing the states, I can't see my 3D model on the screen. I see the model for a second before I draw my video in the background. Here is the code:

        case GameState.InGame:
            GraphicsDevice.Clear(Color.AliceBlue);
            spriteBatch.Begin();

            if (player.State != MediaState.Stopped)
            {
                videoTexture = player.GetTexture();
            }

            Rectangle screen = new Rectangle(GraphicsDevice.Viewport.X,
                GraphicsDevice.Viewport.Y,
                GraphicsDevice.Viewport.Width,
                GraphicsDevice.Viewport.Height);

            // Draw the video, if we have a texture to draw.
            if (videoTexture != null)
            {
                spriteBatch.Draw(videoTexture, screen, Color.White);

                if (Selected_underwater == true)
                {
                    spriteBatch.DrawString(font, "MaxX , MaxY" + maxWidth + "," + maxHeight,
                        new Vector2(400, 10), Color.Red);
                    spriteBatch.Draw(kinectRGBVideo, new Rectangle(0, 0, 100, 100), Color.White);
                    spriteBatch.Draw(butterfly, handPosition, Color.White);
                    foreach (AnimatedSprite a in aSprites)
                    {
                        a.Draw(spriteBatch);
                    }
                }

                if (Selected_planet == true)
                {
                    spriteBatch.Draw(kinectRGBVideo, new Rectangle(0, 0, 100, 100), Color.White);
                    spriteBatch.Draw(butterfly, handPosition, Color.White);
                    spriteBatch.Draw(videoTexture, screen, Color.White);

                    GraphicsDevice.BlendState = BlendState.Opaque;
                    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
                    GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;

                    foreach (_3DModel m in Solar)
                    {
                        m.DrawModel();
                    }
                }
            }

            spriteBatch.End();
            break;
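    The likely culprit is that everything here happens between spriteBatch.Begin() and spriteBatch.End(): in XNA 4.0 a deferred SpriteBatch only submits its sprites at End(), where it re-applies its own blend and depth states and draws over anything rendered in between. A minimal sketch of a reordering that avoids this, reusing the fields from the question and assuming XNA 4.0 state semantics - end the batch, restore 3D-friendly states, draw the models, then open a second batch for overlay sprites:

        GraphicsDevice.Clear(Color.AliceBlue);

        // Pass 1: the video background.
        spriteBatch.Begin();
        if (videoTexture != null)
        {
            spriteBatch.Draw(videoTexture, screen, Color.White);
        }
        spriteBatch.End();   // sprites are actually flushed here

        // SpriteBatch leaves AlphaBlend / DepthStencilState.None set;
        // restore the states the 3D pass needs before drawing models.
        GraphicsDevice.BlendState = BlendState.Opaque;
        GraphicsDevice.DepthStencilState = DepthStencilState.Default;
        GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;

        if (Selected_planet)
        {
            foreach (_3DModel m in Solar)
            {
                m.DrawModel();
            }
        }

        // Pass 2: sprites that must appear on top of the models.
        spriteBatch.Begin();
        spriteBatch.Draw(kinectRGBVideo, new Rectangle(0, 0, 100, 100), Color.White);
        spriteBatch.Draw(butterfly, handPosition, Color.White);
        spriteBatch.End();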


  • Ubuntu impossible to install: "unable to find a medium containing a live filesystem"

    - by Lorenzo
    Yes, I have read questions and answers with similar titles for this issue, which has prevented me from installing Ubuntu for several MONTHS now while I try to figure it out. I have a MacBook Pro with a triple partition (one for Mac Snow Leopard, one for Windows 7, one for Linux) created with the rEFIt boot manager (not Boot Camp). I set up the system according to these instructions for rEFIt: http://lifehacker.com/5531037/how-to-triple+boot-your-mac-with-windows-and-linux-no-boot-camp-required

    Now. There is a free partition ready to accept Linux into its arms, but Linux does not want to participate. Most answers to the issue "unable to find a medium containing a live filesystem" point to changing the BIOS boot settings (which I don't know how to do, especially with this rEFIt boot setup), or to changing the socket of the USB stick (which does not concern me, since I am using a CD - actually I tried with a CD, then with a DVD, since a blank CD is only 700MB while the Ubuntu ISO image is about 731MB). Anyway, this is what happens: I am in the Mac system (using the Mac partition). I insert the DVD with the burned image of Ubuntu (yes, I have tried burning it again and again, on both blank CDs and DVDs). I restart the computer. When rEFIt loads, I hold down the ALT key until the CD image appears. I select it and hit Enter. A small Ubuntu icon appears at the bottom of the screen. Then an Ubuntu logo appears in the middle of the screen with small dots underneath, lighting up progressively over and over to indicate it's loading. Then everything turns black and the following message appears at the end of a few lines of text: "unable to find medium with live file system".

    Please provide very practical suggestions on what to do, for an inexperienced, wannabe-Ubuntu, very patient user. Please start by saying how I access the BIOS setup from the rEFIt boot menu, and exactly what I need to try to change and why. (Will this mess up my rEFIt setup?) And anything else I need to do to finally be able to install Ubuntu. Thanks


  • RAM caching causes severe performance drops

    - by B T
    I have read plenty of threads on memory caching and the standard responses of "a large cache is good, it shouldn't affect performance" and "the kernel knows best". I have recently upgraded from 12.04 to 12.10 and changed from VirtualBox to VMware Workstation, and the performance difference is severe (I suspect because of the latter). When I am running my virtual machine, the system load monitor graph generally shows less than 50% memory usage. The System load indicator shows me that the rest of my RAM is used by the cache all the time. Plain and simple, this is the comparison:

    BEFORE
    - Cache was very sparingly used; pretty much none of my memory usage was cache.
    - Swappiness was 0 (caused my memory to be used first, then swap only if needed).
    - Performance was quite good and logical: RAM was used fully first, caching was minimal. I could run enough software to utilize my full 4GB of RAM without any performance degradation whatsoever.
    - Swap space was then used as needed, which was obviously slower (I am on a HDD) but still usable once the current program was loaded into memory.

    AFTER
    - Cache is used to fill the full 4GB as soon as my virtual machine is run.
    - Swappiness is 0 (same behaviour as before, but the cache takes the full memory straight away).
    - Performance is terrible and unusable while running Ubuntu software.
    - Basic things like changing windows take 2 minutes or more; changing screens happens frame by frame, sometimes over up to 5 minutes.
    - I cannot run an IDE and a VM together like I could with ease before.

    So basically, any suggestions on how to take my performance back to how it was before, while keeping my current setup? My suspicion is that VMware is the problem, but how do I see what is tied to the use of the cache? Surely there is a way to control this behaviour in software as polished as VMware? Thanks

    EDIT: It could also be important to note that the behaviour differs depending on whether VMware is open or closed. If VMware is open, the RAM will lock at about 50% used / 50% cache and go into the complete lock-up mentioned above. Contrastingly, if VMware is closed (after being open), the RAM will continue to rise as needed, the cache will take the complete remaining memory, and there is no noticeable performance degradation.


  • Reproducible freezes on an AMD Fusion (E-350) Sony Vaio

    - by doycho
    A week ago I bought it, and I've been struggling since to make the Ubuntu install stable. There's one thing that makes my life miserable, though: an easily reproducible freeze when I start some kind of video. Here is what happens:

    - Everything works fine for some time.
    - I start vlc/mplayer/flashplayer/totem with something to watch.
    - Within a few minutes I lose the sound (nothing in the logs at this point).
    - At that point the video app instantly allocates all the memory and its CPU usage skyrockets.
    - Total freeze. I can move the cursor around for a few seconds and sometimes even switch to another app. But ultimately there comes a point where I can't do anything: can't kill X with Ctrl+Alt+Backspace (I have it enabled), can't switch to any other console (Ctrl+Alt+F1-6), can't connect to the machine via SSH. The only way to restart it is the Ctrl+Alt+SysRq+UABI magic :)

    What discourages me most is the fact that I can't see anything in the logs. The only error I've noticed is:

        Jun 19 17:00:37 serenity kernel: [ 1506.350676] software-center[17581]: segfault at 30 ip 00007fd3631b814c sp 00007fff18a6fa10 error 4 in libgtk-x11-2.0.so.0.2400.4[7fd362f7d000+436000]

    I've been searching through the Xorg log, kernel logs and syslog. If you have any ideas for how I can get more debug info, I'll be glad to try them. Things I've tried:

    - Changing drivers: the open source one, the proprietary driver, and the xorg-edgers PPA - https://launchpad.net/~xorg-edgers/+archive/ppa
    - Changing to the latest stable kernel (2.6.39)

    Some notes: it may be irrelevant, but the sound is constantly stuttering (this is probably a separate issue, though). I've also found that if I start more video/sound apps, the freeze happens faster.


  • Booting Ubuntu 12.04 in Unity, the window theme & icon theme reverted to, and are now locked to, the Ubuntu default

    - by Antonio
    Last year I set up Faience-Ocre as my icon theme and Adwaita Cupertino L Unity as my window theme (the GTK+ theme was left unchanged at Adwaita (default)). This worked perfectly well until 3 days ago, when on starting my PC I saw the Ubuntu defaults showing up for the window and icon themes. I noticed that at start-up the disk access LED is no longer lit continuously as before, but at moments stops reading for a few seconds (up to 15 s) and then completes the disk reading process. When all was working well, this LED would stay lit continuously. Another thing is that GNOME applications are also not working as they did previously: Nautilus and Gedit now don't use the global menu in the system bar but a local window menu.

    Nautilus - Nemo before the incident
    Nautilus - Nemo now

    ... I opened dconf to check the desktop settings under org.gnome.desktop.wm.preferences, and everything looks good. When I change the settings in the Advanced Settings app in the Theme section, I see the respective value changing in dconf. However, there is no change on my desktop. It looks like it's crippled, and GNOME-related.

    [Update 1]: I have the same defect as referenced @ "ubuntu theme suddenly changed to default and it's not coming back! instead of my GTK theme I get a classic, Windows-95-like grayish theme"... However, one of the solutions mentioned there, http://www.webupd8.org/2011/06/fix-ubuntu-linux-mint-theme-changing-to.html, is not working at all, even with a 20 s to 60 s delay.


  • How do I start Ubuntu without X server?

    - by Kaare Mikkelsen
    So, I'm trying to install the official nVidia drivers for my fancy graphics card, and they advise disabling the X server before installing, as well as making sure that I can boot without the X server, so as not to wreck anything. However, I seem to be doing something wrong. As I understand it, this should be as simple as changing the runlevel from 2 to 1? (I am aware that all this may simply be me not understanding runlevels.) If that is correct, a quick test should be simply typing "sudo init 1" or "sudo telinit 1" in a terminal? Doing that makes the system attempt to shut down, only it stops at the purple screen with the Ubuntu logo and 5 white dots underneath. I haven't observed it get anywhere from there; I always end up holding down the power button. "sudo telinit 3" has no visible effect.

    Alternatively, I should be able to get there using recovery mode, activated through the GRUB menu? I have had very little success with that. After picking recovery mode, I am faced with a set of options about how to proceed. Choosing either "network enabled" or "text only", I get a dialog explaining that this will mount my / file system in read/write mode and asking whether this is what I want. I choose yes, and it seems to report that my drive is fine (there's a single line of text detailing the state of the partition). And then it stops. I haven't tried letting it sit for more than a few minutes, but presumably this process should be comparable in duration to a regular boot?

    I am not particularly fond of messing with any .conf files until I am certain that I can handle things with the training wheels on. So, I guess there are two questions: the one in the title, and "how do I start a text-only session without changing defaults?" Thanks in advance :)


  • Fun Upgrading to .Net 4.0

    - by Sam Abraham
    We are currently in the process of upgrading one of our applications to .Net 4.0. Aside from us geeks always wanting to use the latest and greatest technologies, an immediate business need for Silverlight 4.0 features justified our upgrade endeavor. The following is a summary of some issues we ran into with our web project:

    For security purposes, the IIS 7 .Net 4.0 ISAPI filter is disabled. "Allow" it from the ISAPI and CGI Restrictions screen as shown:

    Figure 1 - Allowing the ASP.Net 4.0 ISAPI filter

    By default, the Web Setup Project only requires the .Net Framework 4 Client Profile to be installed on the target system, which offers a lighter-weight install for client machines consuming .Net 4.0 applications. However, using certain .Net 4.0 features requires the full .Net 4.0 Framework, as outlined in this link: http://msdn.microsoft.com/en-us/library/cc656912.aspx. We hence needed to update the installer to require the complete .Net 4.0 Framework on the target machine and to prompt for its installation if needed.

    To accomplish this goal, we updated the installer's launch conditions to check for .Net 4.0, as well as the installer prerequisites, as shown:

    Figure 2 - Ensure the Web Setup Project runs on the full .Net 4.0 version
    Figure 3 - Launch Conditions screen
    Figure 4 - Set launch condition to .Net 4.0
    Figure 5 - Changing installer prerequisites
    Figure 6 - Changing installer prerequisites
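    For reference, a quick way to verify at runtime which flavor is present: the full framework registers itself under the registry key documented by Microsoft for .NET 4.x, shown below. This is just a sketch (the helper name is ours); a Web Setup Project's launch condition performs the equivalent registry search declaratively rather than in code.

        using Microsoft.Win32;

        static bool IsFullNet4Installed()
        {
            // The Client Profile registers under ...\NDP\v4\Client;
            // the full framework registers under ...\NDP\v4\Full.
            using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
                @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full"))
            {
                return key != null && (int)key.GetValue("Install", 0) == 1;
            }
        }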


  • How to do dependency Injection and conditional object creation based on type?

    - by Pradeep
    I have a service endpoint initialized using DI. It is of the following style, and this endpoint is used across the app:

        public class CustomerService : ICustomerService
        {
            private IValidationService ValidationService { get; set; }
            private ICustomerRepository Repository { get; set; }

            public CustomerService(IValidationService validationService, ICustomerRepository repository)
            {
                ValidationService = validationService;
                Repository = repository;
            }

            public void Save(CustomerDTO customer)
            {
                if (ValidationService.Valid(customer))
                    Repository.Save(customer);
            }
        }

    Now, with the changing requirements, there are going to be different types of customers (Legacy/Regular). The requirement is that, based on the type of the customer, I have to validate and persist the customer in a different way (e.g. if it is a legacy customer, persist to a LegacyRepository). The wrong way to do this would be to break DI and do something like:

        public void Save(CustomerDTO customer)
        {
            if (customer.Type == CustomerTypes.Legacy)
            {
                if (LegacyValidationService.Valid(customer))
                    LegacyRepository.Save(customer);
            }
            else
            {
                if (ValidationService.Valid(customer))
                    Repository.Save(customer);
            }
        }

    My options seem to be:

    - Inject all possible IValidationService and ICustomerRepository implementations and switch based on type, which seems wrong.
    - Change the service signature to Save(IValidationService validation, ICustomerRepository repository, CustomerDTO customer), which is an invasive change.
    - Break DI.
    - Use the Strategy pattern approach for each type and do something like:

        validation = CustomerValidationServiceFactory.GetStrategy(customer.Type);
        validation.Valid(customer);

      but now I have a static method which needs to know how to initialize different services.

    I am sure this is a very common problem. What is the right way to solve it without changing service signatures or breaking DI?
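    One common way to keep the container in charge - sketched here with illustrative names, not from the question - is to inject a small factory abstraction instead of the concrete validator/repository pair, register one pair per customer type, and resolve by type at runtime. The public Save signature stays unchanged and nothing is constructed by hand inside the service:

        using System.Collections.Generic;

        public interface ICustomerPersistenceFactory
        {
            IValidationService GetValidator(CustomerTypes type);
            ICustomerRepository GetRepository(CustomerTypes type);
        }

        public class CustomerService : ICustomerService
        {
            private readonly ICustomerPersistenceFactory factory;

            public CustomerService(ICustomerPersistenceFactory factory)
            {
                this.factory = factory;
            }

            public void Save(CustomerDTO customer)
            {
                // The factory is itself injected, so the container still owns
                // all construction; the service only selects the strategy.
                var validator = factory.GetValidator(customer.Type);
                if (validator.Valid(customer))
                    factory.GetRepository(customer.Type).Save(customer);
            }
        }

        // One possible implementation, wired up in the composition root:
        public class CustomerPersistenceFactory : ICustomerPersistenceFactory
        {
            private readonly Dictionary<CustomerTypes, IValidationService> validators;
            private readonly Dictionary<CustomerTypes, ICustomerRepository> repositories;

            public CustomerPersistenceFactory(
                Dictionary<CustomerTypes, IValidationService> validators,
                Dictionary<CustomerTypes, ICustomerRepository> repositories)
            {
                this.validators = validators;
                this.repositories = repositories;
            }

            public IValidationService GetValidator(CustomerTypes type) => validators[type];
            public ICustomerRepository GetRepository(CustomerTypes type) => repositories[type];
        }

    Most containers (Unity, Ninject, Autofac and the like) also support named or keyed registrations, which can replace the hand-written dictionary mapping entirely.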


  • Video lags/freezes in SMPlayer and VLC

    - by RanRag
    When I try to play my video files in SMPlayer it works fine, but as soon as I switch to fullscreen mode (16:9) the following happens:

    1) Video starts lagging.
    2) Audio and video go out of sync.
    3) CPU usage rises to ~50%.
    4) SMPlayer starts to hang.

    My current SMPlayer configuration:

    1) Video Output Driver = x11 (slow)
    2) Audio Output Driver = alsa (0.0-HDA Intel)
    3) Cache = 8192 KB
    4) Threads for decoding (MPEG-1/2 and H.264 only) = 2

    Things I tried to solve this problem:

    1) Changing the video output driver to xv, gl.
    2) Changing the audio output driver to pulse.
    3) Increasing the cache size, and also trying nocache.

    Everything works fine on Windows, but I don't want to switch to Windows just to play video files.

    My system config: Acer Aspire One D270, Atom N2600 (Cedar Trail) 1.6GHz, 2GB memory, Intel GMA 3600 graphics, Ubuntu 12.04, kernel release 3.2.0-23-generic-pae.

    Everything else is working fine: I have no resolution issues, and Bluetooth and wireless also work. Just ask me for any other log file and I will be happy to post it. Attached: SMPlayer log, MPlayer terminal output, codec information for the currently playing file.


  • Sorting objects before rendering

    - by dreta
    I'm trying to implement a scene graph, and in all the articles I've come across there is talk about object sorting: you'd sort your objects by "material", for example. Now, until I sat down and started implementing it, I kind of took this for granted, because it made sense. But now I'm wondering: what does sorting actually change? In my engine I have a manager for UBOs; I use those to store data that'll be shared between programs. At the moment that only involves time, the camera and projection matrices, and lights (I'm not worrying about managing which lights affect which objects ATM). Now for each model I have to change the model-to-world matrix uniform, and no sorting is going to change that. So is the jump from changing this matrix to also setting a material for each object that bad? I vaguely remember reading somewhere that each time you change something in the pipeline it has to get flushed, and that can cause performance issues. But since I'm setting up a model-to-world matrix for each draw call anyway, what sense does it make to ever be concerned about this? BTW, is there any information on whether changing a uniform or calling glBufferSubData is more (or less) expensive?
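    For illustration, a minimal sketch of what material sorting usually buys you (all names here are hypothetical, not from any particular engine): build a key per draw call, sort once per frame, and rebind the expensive state only when the key changes. Per-object uniforms such as the model-to-world matrix are still set on every draw; the saving is in the shader/texture/blend rebinds, which are the costly ones:

        struct DrawCall
        {
            public int MaterialId;   // stands in for shader program + textures + blend state
            public int MeshId;
            public Matrix4 World;    // per-object uniform, updated on every draw anyway
        }

        void Render(List<DrawCall> calls)
        {
            // Group draw calls that share a material next to each other.
            calls.Sort((a, b) => a.MaterialId.CompareTo(b.MaterialId));

            int bound = -1;
            foreach (var c in calls)
            {
                if (c.MaterialId != bound)
                {
                    BindMaterial(c.MaterialId); // expensive: program/texture/state changes
                    bound = c.MaterialId;
                }
                SetModelToWorld(c.World);       // cheap per-draw uniform update
                DrawMesh(c.MeshId);
            }
        }

    The point of the sort is that with N objects and M materials you pay for roughly M expensive rebinds per frame instead of N, while the cheap per-object uniform cost is unchanged either way.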



  • What are some good seminar topics that can be used to improve designer & developer communication?

    - by tactoth
    Hello guys. What I'll describe is what happens in the company I work for, but I know it's a common issue in software companies. I'm the development team leader in an internet service company that provides a service very similar to Dropbox. In our company we have mainly two divisions, the tech division and the designers' division, each with its own reporting hierarchy. Designers focus on designing the UI and prioritizing features, while developers focus on implementing the designers' ideas (more like being driven, as our big boss has said). Then here comes our issue: the DEV team and the DES team communicate very badly.

    DEV complain about DES for these reasons:

    - Too-frequent changes to requirements.
    - Too-complicated interactions (our DEV team has actually learned many HCI principles).
    - Design documents are incomplete; usually you just get "design principles" and it's up to DEV to fill in the design details.
    - When you find design defects and ask the DES team to resolve them, they quickly change the principles, and you spend another several weeks because the change is so fundamental.

    While DES complain about DEV for these reasons:

    - The code architecture is not good enough to adapt to changing requirements (obviously DES knows something about software development).
    - Product design is about principles, not details; DEV fails to realize this.
    - Communication should be quick and mainly oral; trying to document most feature discussions for reference is too much overhead and doesn't make sense.

    As you can see, DEV and DES have different ideas about product design and encourage very different practices. We have this difference because of the way we work. So our solution is to plan some seminars to make each side more aware of the way the other side works. My question, then, is: what are some good topics for such seminars? Guessing some people may not think seminars can solve this problem, please also suggest your own solution.


  • In VirtualBox, I can't access the DVD drive to install a guest OS

    - by user211062
    I have installed a fresh copy of Ubuntu Server 12.04 and VirtualBox 4.3. I have set up a VM called "MediaServer" and tried to start it. I then get the following error:

        Cannot open host device '/dev/sr0' for readonly access. Check the
        permissions of that device ('/bin/ls -l /dev/sr0'): Most probably you
        need to be member of the device group. Make sure that you logout/login
        after changing the group settings of the current user (VERR_ACCESS_DENIED)

    I have looked all over the Internet and have been unable to find a solution. Using Webmin, I tried changing the group settings so that my user name was in the "vboxusers" group, but that did not work either. I tried various other changes to the group settings and none of them worked; I also tried rebooting the server after the changes, and that didn't work either. I have been following a guide on how to set up an Ubuntu server from the website linuxhomeserverguide.com, and when it came to the section where you finally set up your first virtual machine, I was stumped. I would really appreciate it if someone could help me. Thanks in advance.


  • Can't change folder background

    - by newcomer
    I tried to change it by dragging from the Backgrounds and Emblems window, but the icon just goes back to that window rather than changing the folder background. However, I can change the task bar with this drag-and-drop. Is it perhaps something about changing ownership permissions? If so, how do I change that? The file /home/mashruf/.gconf/apps/nautilus/preferences/%gconf.xml says the following. Should I change this file? How?

        <?xml version="1.0"?>
        <gconf>
            <entry name="click_policy" mtime="1297597800" type="string">
                <stringvalue>single</stringvalue>
            </entry>
            <entry name="default_folder_viewer" mtime="1297597336" type="string">
                <stringvalue>list_view</stringvalue>
            </entry>
            <entry name="media_autorun_x_content_open_folder" mtime="1297534321" type="list" ltype="string">
            </entry>
            <entry name="media_autorun_x_content_ignore" mtime="1297534321" type="list" ltype="string">
            </entry>
            <entry name="media_autorun_x_content_start_app" mtime="1297534321" type="list" ltype="string">
                <li type="string">
                    <stringvalue>x-content/software</stringvalue>
                </li>
            </entry>
            <entry name="start_with_location_bar" mtime="1297300028" type="bool" value="true"/>
            <entry name="side_pane_view" mtime="1297269334" type="string">
                <stringvalue>NautilusTreeSidebar</stringvalue>
            </entry>
            <entry name="navigation_window_saved_maximized" mtime="1297600306" type="bool" value="false"/>
            <entry name="navigation_window_saved_geometry" mtime="1297600306" type="string">
                <stringvalue>964x608+59+2</stringvalue>
            </entry>
            <entry name="sidebar_width" mtime="1297390418" type="int" value="192"/>
        </gconf>


  • The (non) Importance of Language

    - by Eric A. Stephens
    Working with a variety of clients on EA initiatives, one begins to realize that not everyone is a fan of EA. Specifically, they are not fans of the "a-word". Some organizations have abused the term by creating the title and assigning it to just about anyone who demonstrates above-average prowess with a particular technology. Other organizations assign the title to those managers left with no staff after a reorg. Some companies, unfortunately, have simply had a bad go of it with regard to EA... or any "A", for that matter. What we call "EA" is almost irrelevant. But what is not negotiable for those who want to succeed in business is managing change. That is what EA is all about. I recall sitting in Zachman training led by the man himself. He posits that the only organizations that don't need EA (or whatever you want to call it) are those that are not changing. My experience suggests that orgs that aren't changing aren't growing. And if you aren't growing, you're dying. An EA program will not succeed unless there is a desire to change. No desire to change suggests the EA/Advisor/Change Agent should just walk the other way.


  • Can't login to Unity 3d after enabling Xinerama for a short moment

    - by Amir Adar
    Today I connected a second monitor to my computer. I set it up using nVidia's control panel, and all was working quite well, so I figured it wouldn't be a problem to try Xinerama, just to see the difference between it and TwinView. After enabling Xinerama and restarting the X session, I found I was logged into a Unity 2D session. I thought it was a problem with Xinerama, so I switched back to TwinView, but it still logged me into Unity 2D. I tried disconnecting the second monitor - no luck: still Unity 2D. I tried changing GPU drivers and installing drivers from a separate PPA, and still I was logged into Unity 2D. Up until this point I had never had a problem logging into Unity 3D; it only started after I tried using Xinerama. I should note that I was doing all this while updates were running in the background, so it could be something related to that, though I can't imagine what (I tried booting with another kernel, but no luck). So what exactly happened? Did changing the mode to Xinerama trigger some other changes that I'm not aware of? Did the updates cause some malfunction in the driver? Is it something else?


  • Basic Puppet installation with Solaris 11.2 beta

    - by user13366125
    At the recent announcement we talked a lot about the Puppet integration. But how do you set it up? That is what I want to show in this blog entry, and the example I'm using is useful in practice, too. Due to the extremely low overhead of zones, I frequently see really large numbers of zones on a single system. Changing /etc/hosts or changing an SMF service property on 3 systems is not that hard. Doing it on a system with 500 zones is... let's say it diplomatically... a job you give to someone you want to punish. Puppet can help in this case by managing the configuration and easing its distribution. You describe the changes you want to make in a file, or set of files, called a manifest in the Puppet world, and then roll them out to your servers, no matter whether they are virtual or physical. A warning at first: Puppet is a really, really vast topic, and this article is really basic; it goes no more than toe-deep into the possibilities and capabilities of Puppet. It doesn't try to explain Puppet, just how you get it up and running and do basic tests. There are many good books on Puppet. Please read one of them, and the concepts and the example will immediately get much clearer. (more)


  • Should I build a multi-threaded system that handles events from a game and sorts them, independently, into different threads based on priority?

    - by JonathonG
    Can I build a multi-threaded system that handles events from a game and sorts them, independently, into different threads based on priority, and is it a good idea? Here's more info: I am about to begin work on porting a mid-sized game from Flash/AS3 to Java so that I can continue development with multi-threading capabilities. Here's a small bit of background about the game. The game contains numerous asynchronous activities, such as "world updating" (the game environment is constantly changing based on a set of natural laws and forces), procedural generation of terrain, NPCs, quests and items, and on top of that, the effects of all of the player's interactions with his environment are calculated in real time, based on a set of constantly changing "stats" and, once again, natural laws and forces. All of these things going on at once, in an asynchronous manner, seem to lend themselves to multi-threading very well. My question is: can I build some kind of central engine that handles the "stacking" of all of these events as they are triggered, dynamically sorts them out amongst the available threads, and would it be a good idea?

    As an example: essentially, every time something happens (e.g. a magic missile being generated by a spell, or a bunch of plants needing to grow to their next stage), instead of just processing that task right then and adding the new object(s) to a list of managed objects, send a reference to that event to a core "event handler" that throws it into a stack of all other currently queued events, which then sorts them by urgency and splits them between a number of available threads for as-fast-as-possible multithreaded execution.
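    In outline, what is being described is a priority job queue feeding a worker pool. A minimal sketch of that "core event handler" idea (written in C# rather than Java, though the shape is identical; all names are illustrative, and it assumes .NET 6+ for PriorityQueue):

        using System;
        using System.Collections.Generic;
        using System.Threading;

        class EventDispatcher
        {
            private readonly PriorityQueue<Action, int> queue = new();
            private readonly object gate = new object();

            public EventDispatcher(int workerCount)
            {
                for (int i = 0; i < workerCount; i++)
                    new Thread(Worker) { IsBackground = true }.Start();
            }

            // Lower number = more urgent (e.g. 0 = player-visible effects,
            // 5 = background terrain generation).
            public void Post(Action handler, int priority)
            {
                lock (gate)
                {
                    queue.Enqueue(handler, priority);
                    Monitor.Pulse(gate); // wake one idle worker
                }
            }

            private void Worker()
            {
                while (true)
                {
                    Action next;
                    lock (gate)
                    {
                        while (queue.Count == 0)
                            Monitor.Wait(gate);
                        next = queue.Dequeue(); // always the most urgent item
                    }
                    next(); // run outside the lock so workers stay parallel
                }
            }
        }

    The queue only solves scheduling; handlers that touch shared world state still need their own synchronization, and in practice that is the harder part of multi-threading a simulation.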


  • Why is text on my TV monitor all grainy?

    - by searchfgold6789
    I have successfully installed the Catalyst drivers and hooked up my 1080p TV to the Radeon HD 5750's HDMI port, and now it is working. There only seems to be one issue: most images appear OK, at least to my eyes, but text is grainy and difficult to read (not sure how well it can be seen in the picture). The small monitor I have works fine, and hopefully the problem is visible in the new pictures, which are only slightly better; images are rendered a little funny too. Changing the antialiasing does nothing, and changing the refresh rate makes only a slight difference, if any. Smaller resolutions such as 1024x768 look OK but are the wrong aspect ratio. I have tried renaming the input mode on the TV to "PC", and that helped... a little bit. I also tried a different PC, with the same result, and messed around extensively with the ATI Catalyst Control Center, which did not help either. I am wondering if there is anything else I can try to increase the quality so it is more usable.


  • Slight stuttering when moving windows in fresh 12.04 install

    - by Konsolkongen
    I installed Ubuntu 12.04 today, and my problem is that moving windows around the screen doesn't feel smooth at all. Usually I can fix this by changing the refresh rate to 60Hz, but this time it doesn't help. My graphics card is an Nvidia GTX 560 Ti, and I've tried the 295.40, 295.45 and 304.43 (which I'm currently using) drivers, but none of them has resolved the problem. I searched around a bit and tried changing the refresh rate using compizconfig-settings-manager and xrandr. No change using CCSM, but when I tried xrandr I got this reply:

        konsolkongen@konsolkongen-desktop:~$ xrandr -r 60
        Rate 60.0 Hz not available for this size

    - which is nonsense, of course. This is what my xorg.conf file looks like:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 295.33 (buildd@allspice) Fri Mar 30 15:25:24 UTC 2012

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "Samsung SyncMaster"
            HorizSync 30.0 - 81.0
            VertRefresh 56.0 - 75.0
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GTX 560 Ti"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "TwinViewXineramaInfoOrder" "DFP-0"
            Option "metamodes" "DFP-0: 1680x1050_60 +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

    Any help would be greatly appreciated; my obsession with video quality can't stand stuttering like this. For what it's worth, I don't have any screen tearing, so at least V-sync is on. Thanks.


  • Ubuntu installer shows three small screens

    - by Sylan
    I'm trying to install Ubuntu (or Kubuntu, which I tried first) 13.04 on a new laptop. I need to install in UEFI mode in order to properly dual boot with Windows 8. I've managed to overcome most of the UEFI issues up to the installer, which appeared as a black screen until I used nomodeset. Now the installer appears; however, it does not fit my screen size. Instead, it appears as three small identical screens across the top of the monitor. I thought the problem could be solved by changing the display resolution in GRUB via the vga number, but this simply expanded the width of the three screens. While I could install at this point, the identical screens are too small for me to be able to read the installer. As well, the "Try Ubuntu" option simply gets stuck at a black screen. I'm afraid these problems may persist through the installation if I attempt to continue.

    Additional information: the laptop is a Lenovo IdeaPad Y580 with an i7-3630QM processor and a GeForce GTX 660M graphics card which works alongside an Intel HD 4000 integrated card via Optimus.


  • 12.04 LTS Apache2 writing files from webpage at runtime has no effect - possible read/write permissions?

    - by J Green
    I'm running 12.04 LTS with Apache and Mono in VirtualBox, with the goal of hosting a web app (coded in ASP.NET and C#) on my local network. The scripts on the page are able to successfully read from text files in the same directory as my site (/var/www/mysite/) but do not seem to be able to write. I'm sure the code works, because it did in my testing with Visual Web Developer on Windows. I don't get any errors, but when I click the button on the loaded webpage, the text file in question does not change. I'm fairly new to Linux in general, so I'm not too familiar with how to set permissions properly, and it may be a permissions issue. Unfortunately, I have searched all over the internet and haven't found a solution that worked, but I've tried (perhaps incorrectly) changing the owner of the files in question to www-root, and changing the mode to a+rw, sadly to no avail. I have tried everything in "What's the simplest way to edit and add files to /var/www?" but it doesn't work. I hope someone can help me out.
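    As a first diagnostic step, one option (a sketch only; the page, control and file names here are made up, not from the question) is to catch and display the write failure explicitly. On Linux a permissions problem usually surfaces as an exception rather than a silent no-op, so if nothing appears, the handler may not be firing at all:

        using System;
        using System.Web.UI;

        public partial class MyPage : Page
        {
            protected void SaveButton_Click(object sender, EventArgs e)
            {
                // Resolves relative to the site root, e.g. /var/www/mysite/data.txt
                string path = Server.MapPath("~/data.txt");
                try
                {
                    System.IO.File.AppendAllText(path, DateTime.Now + Environment.NewLine);
                }
                catch (Exception ex)
                {
                    // On Ubuntu, Apache/mod_mono normally runs as www-data; if that
                    // user cannot write to the file, the failure shows up here
                    // (typically as an UnauthorizedAccessException).
                    ErrorLabel.Text = "Write failed: " + ex.Message;
                }
            }
        }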


  • AngularJS on top of ASP.NET: Moving the MVC framework out to the browser

    - by Varun Chatterji
    Heavily drawing inspiration from Ruby on Rails, MVC4's convention-over-configuration model of development soon became the Holy Grail of .NET web development. The MVC model brought with it the goodness of proper separation of concerns between business logic, data, and the presentation logic. However, the MVC paradigm was still one in which server side .NET code could be mixed with presentation code. The Razor templating engine, though cleaner than its predecessors, still encouraged and allowed you to mix .NET server side code with presentation logic. Thus, for example, if the developer required a certain <div> tag to be shown if a particular variable ShowDiv was true in the View's model, the code could look like the following:

    Fig 1: To show a div or not. Server side .NET code is used in the View

    Mixing .NET code with HTML in views can soon get very messy. Wouldn't it be nice if the presentation layer (HTML) could be pure HTML? Also, in the ASP.NET MVC model, some of the business logic invariably resides in the controller. It is tempting to use an anti-pattern like the one shown above to control whether a div should be shown or not. However, best practice would indicate that the Controller should not be aware of the div. The ShowDiv variable in the model should not exist. A controller should ideally only be used to do the plumbing of getting the data populated in the model and nothing else. The view (ideally pure HTML) should render the presentation layer based on the model. In this article we will see how AngularJS, a new JavaScript framework by Google, can be used effectively to build web applications where:

    1. Views are pure HTML
    2. Controllers (in the server sense) are pure REST based API calls
    3. The presentation layer is loaded as needed from partial HTML-only files

    What is MVVM?

    MVVM, short for Model View View Model, is a new paradigm in web development. In this paradigm, the Model and View stuff exists on the client side through JavaScript instead of being processed on the server through postbacks. These frameworks are JavaScript frameworks that facilitate the clear separation of the "frontend", or the data rendering logic, from the "backend", which is typically just a REST based API that loads and processes data through a resource model. The frameworks are called MVVM because a change to the Model (through JavaScript) gets reflected in the View immediately, i.e. Model > View. Also, a change on the View (through manual input) gets reflected in the Model immediately, i.e. View > Model. The following figure shows this conceptually (comments are shown in red):

    Fig 2: Demonstration of MVVM in action

    In Fig 2, two text boxes are bound to the same variable model.myInt. Thus, changing the view manually (changing one text box through keyboard input) also changes the other textbox in real time, demonstrating the V > M property of an MVVM framework. Furthermore, clicking the button adds 1 to the value of model.myInt, thus changing the model through JavaScript. This immediately updates the view (the value in the two textboxes), demonstrating the M > V property of an MVVM framework. Thus we see that the model in an MVVM JavaScript framework can be regarded as "the single source of truth". This is an important concept. Angular is one such MVVM framework. We shall use it to build a simple app that sends SMS messages to a particular number.

    Application, Routes, Views, Controllers, Scope and Models

    Angular can be used in many ways to construct web applications.
    For this article, we shall only focus on building Single Page Applications (SPAs). Many of the approaches we will follow in this article have alternatives. It is beyond the scope of this article to explain every nuance in detail, but we shall try to touch upon the basic concepts and end up with a working application that can be used to send SMS messages using Sent.ly Plus (a service that is itself built using Angular). Before you read on, we would like to urge you to forget what you know about Models, Views, Controllers and Routes in the ASP.NET MVC4 framework. All these words have different meanings in the Angular world. Whenever these words are used in this article, they will refer to Angular concepts and not ASP.NET MVC4 concepts. The following figure shows the skeleton of the root page of an SPA:

    Fig 3: The skeleton of a SPA

    The skeleton of the application is based on the Bootstrap starter template, which can be found at http://getbootstrap.com/examples/starter-template/. Apart from loading the Angular, jQuery and Bootstrap JavaScript libraries, it also loads our custom scripts:

        /app/js/controllers.js
        /app/js/app.js

    These scripts define the routes, views and controllers which we shall come to in a moment.

    Application

    Notice that the body tag (Fig. 3) has an extra attribute:

        ng-app="smsApp"

    Providing this tag "bootstraps" our single page application. It tells Angular to load a "module" called smsApp. This "module" is defined in /app/js/app.js:

        angular.module('smsApp', ['smsApp.controllers', function () { }])

    Fig 4: The definition of our application module

    The line shown above declares a module called smsApp. It also declares that this module "depends" on another module called "smsApp.controllers". The smsApp.controllers module will contain all the controllers for our SPA.

    Routing and Views

    Notice that in the navbar (in Fig 3) we have included two hyperlinks, to "#/app" and "#/help". This is how Angular handles routing. Since the URLs start with "#", they are actually just bookmarks (and not server side resources). However, our route definition (in /app/js/app.js) gives these URLs a special meaning within the Angular framework.

        angular.module('smsApp', ['smsApp.controllers', function () { }])
            //Configure the routes
            .config(['$routeProvider', function ($routeProvider) {
                $routeProvider.when('/binding', {
                    templateUrl: '/app/partials/bindingexample.html',
                    controller: 'BindingController'
                });
            }]);

    Fig 5: The definition of a route with an associated partial view and controller

    As we can see from the previous code sample, we are using the $routeProvider object in the configuration of our smsApp module. Notice how the code "asks for" the $routeProvider object by specifying it as a dependency in the [] braces and then defining a function that accepts it as a parameter. This is known as dependency injection. Please refer to the following link if you want to delve into this topic: http://docs.angularjs.org/guide/di

    What the above code snippet is doing is telling Angular that when the URL is "#/binding", it should load the HTML snippet ("partial view") found at /app/partials/bindingexample.html. Also, for this URL, Angular should load the controller called "BindingController". We have also marked the div with the class "container" (in Fig 3) with the ng-view attribute. This attribute tells Angular that views (partial HTML pages) defined in the routes will be loaded within this div.
    You can see that the Angular JavaScript framework, unlike many other frameworks, works purely by extending HTML tags and attributes. It also allows you to extend HTML with your own tags and attributes (through directives) if you so desire; you can find out more about directives at the following URL: http://www.codeproject.com/Articles/607873/Extending-HTML-with-AngularJS-Directives

    Controllers and Models

    We have seen how we define which views and controllers should be loaded for a particular route. Let us now consider how controllers are defined. Our controllers are defined in the file /app/js/controllers.js. The following snippet shows the definition of the "BindingController", which is loaded when we hit the URL http://localhost:port/index.html#/binding (as we defined in the route earlier, shown in Fig 5). Remember that we had defined that our application module "smsApp" depends on the "smsApp.controllers" module (see Fig 4). The code snippet below shows how the "BindingController" defined in the route shown in Fig 5 is defined in the module smsApp.controllers:

        angular.module('smsApp.controllers', [function () { }])
            .controller('BindingController', ['$scope', function ($scope) {
                $scope.model = {};
                $scope.model.myInt = 6;
                $scope.addOne = function () {
                    $scope.model.myInt++;
                }
            }]);

    Fig 6: The definition of a controller in the "smsApp.controllers" module.

    The pieces are falling into place! Remember Fig. 2? That was the code of a partial view that was loaded within the container div of the skeleton SPA shown in Fig 3. The route definition shown in Fig 5 also defined that the controller called "BindingController" (shown in Fig 6) was loaded when we loaded the URL http://localhost:22544/index.html#/binding. The button in Fig 2 was marked with the attribute ng-click="addOne()", which added 1 to the value of model.myInt. In Fig 6, we can see that this function is actually defined in the "BindingController".

    Scope

    We can see from Fig 6 that in the definition of "BindingController", we defined a dependency on $scope and then, as usual, defined a function which "asks for" $scope as per the dependency injection pattern. So what is $scope? Any guesses? As you might have guessed, a scope is a particular "address space" where variables and functions may be defined. This has a similar meaning to scope in a programming language like C#.

    Model: The Scope is not the Model

    It is tempting to assign variables in the scope directly. For example, we could have defined myInt as $scope.myInt = 6 in Fig 6 instead of $scope.model.myInt = 6. The reason why this is a bad idea is that scope is hierarchical in Angular. Thus, if we were to define a controller within another controller (nested controllers), the inner controller would inherit the scope of the parent controller. This inheritance follows JavaScript prototypal inheritance. Let's say the parent controller defined a variable through $scope.myInt = 6. The child controller would inherit the scope through JavaScript prototypal inheritance. This basically means that the child scope has a variable myInt that points to the parent scope's myInt variable. Now, if we assigned the value of myInt in the parent, the child scope would be updated with the same value, as the child scope's myInt variable points to the parent scope's myInt variable.
    However, if we were to assign the value of the myInt variable in the child scope, the link of that variable to the parent scope would be broken, as the variable myInt in the child scope now points to the value 6 and not to the parent scope's myInt variable. But if we defined a variable model in the parent scope, the child scope would also have a variable model that points to the model variable in the parent scope. Updating the value of $scope.model.myInt in the parent scope would change the model variable in the child scope too, as the variable points to the model variable in the parent scope. Now, changing the value of $scope.model.myInt in the child scope would ALSO change the value in the parent scope. This is because the model reference in the child scope points to the scope variable in the parent. We did no new assignment to the model variable in the child scope; we only changed an attribute of the model variable. Since the model variable (in the child scope) points to the model variable in the parent scope, we have successfully changed the value of myInt in the parent scope. Thus the value of $scope.model.myInt in the parent scope becomes "the single source of truth". This is a tricky concept, which is why it is considered good practice NOT to use scope inheritance. More info on prototypal inheritance in Angular can be found in the "JavaScript Prototypal Inheritance" section at the following URL: https://github.com/angular/angular.js/wiki/Understanding-Scopes

    Building It: An AngularJS application using a .NET Web API Backend

    Now that we have a perspective on the basic components of an MVVM application built using Angular, let's build something useful. We will build an application that can be used to send SMS messages to a given phone number. The following diagram describes the architecture of the application we are going to build:

    Fig 7: Broad application architecture

    We are going to add an HTML partial to our project. This partial will contain the form fields that will accept the phone number and the message that needs to be sent as an SMS. It will also display all the messages that have previously been sent. All the executable code that runs on the occurrence of events (button clicks etc.) in the view resides in the controller. The controller interacts with the ASP.NET Web API to get a history of SMS messages, add a message, etc. through a REST based API. For simplicity, we will use an in-memory data structure when creating this application. Thus, the tasks ahead of us are:

    1. Creating the REST Web API with GET, PUT, POST, DELETE methods
    2. Creating the SmsView.html partial
    3. Creating the SmsController controller with methods that are called from the SmsView.html partial
    4. Adding a new route that loads the controller and the partial

    1. Creating the REST WebAPI

    This is a simple task that should be quite straightforward to any .NET developer.
    The following listing shows our ApiController:

        public class SmsMessage
        {
            public string to { get; set; }
            public string message { get; set; }
        }

        public class SmsResource : SmsMessage
        {
            public int smsId { get; set; }
        }

        public class SmsResourceController : ApiController
        {
            public static Dictionary<int, SmsResource> messages = new Dictionary<int, SmsResource>();
            public static int currentId = 0;

            // GET api/<controller>
            public List<SmsResource> Get()
            {
                List<SmsResource> result = new List<SmsResource>();
                foreach (int key in messages.Keys)
                {
                    result.Add(messages[key]);
                }
                return result;
            }

            // GET api/<controller>/5
            public SmsResource Get(int id)
            {
                if (messages.ContainsKey(id))
                    return messages[id];
                return null;
            }

            // POST api/<controller>
            public List<SmsResource> Post([FromBody] SmsMessage value)
            {
                // Synchronize on messages so we don't have id collisions
                lock (messages)
                {
                    // Copy into a new SmsResource: the model binder gives us an
                    // SmsMessage, which cannot be downcast to SmsResource.
                    SmsResource res = new SmsResource { to = value.to, message = value.message };
                    res.smsId = currentId++;
                    messages.Add(res.smsId, res);
                    //SentlyPlusSmsSender.SendMessage(value.to, value.message);
                    return Get();
                }
            }

            // PUT api/<controller>/5
            public List<SmsResource> Put(int id, [FromBody] SmsMessage value)
            {
                // Synchronize on messages so we don't have id collisions
                lock (messages)
                {
                    if (messages.ContainsKey(id))
                    {
                        // Update the message
                        messages[id].message = value.message;
                        messages[id].to = value.to;
                    }
                    return Get();
                }
            }

            // DELETE api/<controller>/5
            public List<SmsResource> Delete(int id)
            {
                if (messages.ContainsKey(id))
                {
                    messages.Remove(id);
                }
                return Get();
            }
        }

    Once this class is defined, we should be able to access the Web API with a simple GET request from the browser:

        http://localhost:port/api/SmsResource

    Notice the commented line:

        //SentlyPlusSmsSender.SendMessage

    The SentlyPlusSmsSender class is defined in the attached solution. We have shown this line commented out because we want to explain the core Angular concepts; if you load the attached solution, this line is uncommented in the source and an actual SMS will be sent! By default, the API returns XML. For consumption of the API in Angular, we would like it to return JSON. To change the default to JSON, we make the following change to the WebApiConfig.cs file located in the App_Start folder:

        public static class WebApiConfig
        {
            public static void Register(HttpConfiguration config)
            {
                config.Routes.MapHttpRoute(
                    name: "DefaultApi",
                    routeTemplate: "api/{controller}/{id}",
                    defaults: new { id = RouteParameter.Optional }
                );

                var appXmlType = config.Formatters.XmlFormatter.
                    SupportedMediaTypes.
                    FirstOrDefault(t => t.MediaType == "application/xml");
                config.Formatters.XmlFormatter.SupportedMediaTypes.Remove(appXmlType);
            }
        }

    We now have our backend REST API, which we can consume from Angular!

    2. Creating the SmsView.html partial

    This simple partial will define two fields: the destination phone number (international format, starting with a +) and the message. These fields will be bound to model.phoneNumber and model.message. We will also add a button that we shall hook up to sendMessage() in the controller. A list of all previously sent messages (bound to model.allMessages) will also be displayed below the form input.
    The following code shows the partial:

        <!-- If model.errorMessage is defined, then render the error div -->
        <div class="alert alert-danger alert-dismissable" style="margin-top: 30px;"
             ng-show="model.errorMessage != undefined">
            <button type="button" class="close" data-dismiss="alert" aria-hidden="true">&times;</button>
            <strong>Error!</strong> <br /> {{ model.errorMessage }}
        </div>

        <!-- The input fields bound to the model -->
        <div class="well" style="margin-top: 30px;">
            <table style="width: 100%;">
                <tr>
                    <td style="width: 45%; text-align: center;">
                        <input type="text" placeholder="Phone number (eg; +44 7778 609466)"
                               ng-model="model.phoneNumber" class="form-control" style="width: 90%"
                               onkeypress="return checkPhoneInput();" />
                    </td>
                    <td style="width: 45%; text-align: center;">
                        <input type="text" placeholder="Message" ng-model="model.message"
                               class="form-control" style="width: 90%" />
                    </td>
                    <td style="text-align: center;">
                        <button class="btn btn-danger" ng-click="sendMessage();"
                                ng-disabled="model.isAjaxInProgress" style="margin-right: 5px;">Send</button>
                        <img src="/Content/ajax-loader.gif" ng-show="model.isAjaxInProgress" />
                    </td>
                </tr>
            </table>
        </div>

        <!-- The past messages -->
        <div style="margin-top: 30px;">
            <!-- The following div is shown if there are no past messages -->
            <div ng-show="model.allMessages.length == 0">
                No messages have been sent yet!
            </div>
            <!-- The following div is shown if there are some past messages -->
            <div ng-show="model.allMessages.length != 0">
                <table style="width: 100%;" class="table table-striped">
                    <tr>
                        <td>Phone Number</td>
                        <td>Message</td>
                        <td></td>
                    </tr>
                    <!-- The ng-repeat directive is like the repeater control in .NET,
                         but as you can see this partial is pure HTML, which is much cleaner -->
                    <tr ng-repeat="message in model.allMessages">
                        <td>{{ message.to }}</td>
                        <td>{{ message.message }}</td>
                        <td>
                            <button class="btn btn-danger" ng-click="delete(message.smsId);"
                                    ng-disabled="model.isAjaxInProgress">Delete</button>
                        </td>
                    </tr>
                </table>
            </div>
        </div>

    The above code is commented and should be self-explanatory. Conditional rendering is achieved using the ng-show="condition" attribute on various div tags. Input fields are bound to the model, and the Send button is bound to the sendMessage() function in the controller through the ng-click="sendMessage()" attribute defined on the button tag. While AJAX calls are taking place, the controller sets model.isAjaxInProgress to true; based on this variable, buttons are disabled through the ng-disabled directive, which is added as an attribute to the buttons. The ng-repeat directive, added as an attribute to the tr tag, causes the table row to be rendered multiple times, much like an ASP.NET repeater.

    3. Creating the SmsController controller

    The penultimate piece of our application is the controller, which responds to events from our view and interacts with our MVC4 REST Web API. The following listing shows the code we need to add to /app/js/controllers.js. Note that controller definitions can be chained. Also note that this controller "asks for" the $http service. The $http service is a simple way to do AJAX in Angular. So far we have only encountered modules, controllers, views and directives in Angular; $http is a new kind of entity in Angular called a service. More information on Angular services can be found at the following URL: http://docs.angularjs.org/guide/dev_guide.services.understanding_services
        .controller('SmsController', ['$scope', '$http', function ($scope, $http) {
            //We define the model
            $scope.model = {};

            //We define the allMessages array in the model
            //that will contain all the messages sent so far
            $scope.model.allMessages = [];

            //The error, if any
            $scope.model.errorMessage = undefined;

            //We initially load data, so set isAjaxInProgress = true
            $scope.model.isAjaxInProgress = true;

            //Load all the messages
            $http({ url: '/api/smsresource', method: "GET" }).
                success(function (data, status, headers, config) {
                    // this callback will be called asynchronously
                    // when the response is available
                    $scope.model.allMessages = data;
                    // We are done with AJAX loading
                    $scope.model.isAjaxInProgress = false;
                }).
                error(function (data, status, headers, config) {
                    // called asynchronously if an error occurs
                    // or the server returns a response with an error status
                    $scope.model.errorMessage = "Error occurred status:" + status;
                    // We are done with AJAX loading
                    $scope.model.isAjaxInProgress = false;
                });

            $scope.delete = function (id) {
                // We are making an AJAX call, so we set this to true
                $scope.model.isAjaxInProgress = true;
                $http({ url: '/api/smsresource/' + id, method: "DELETE" }).
                    success(function (data, status, headers, config) {
                        // this callback will be called asynchronously
                        // when the response is available
                        $scope.model.allMessages = data;
                        // We are done with AJAX loading
                        $scope.model.isAjaxInProgress = false;
                    }).
                    error(function (data, status, headers, config) {
                        // called asynchronously if an error occurs
                        // or the server returns a response with an error status
                        $scope.model.errorMessage = "Error occurred status:" + status;
                        // We are done with AJAX loading
                        $scope.model.isAjaxInProgress = false;
                    });
            }

            $scope.sendMessage = function () {
                $scope.model.errorMessage = undefined;
                var message = '';
                if ($scope.model.message != undefined)
                    message = $scope.model.message.trim();
                if ($scope.model.phoneNumber == undefined || $scope.model.phoneNumber == ''
                    || $scope.model.phoneNumber.length < 10 || $scope.model.phoneNumber[0] != '+') {
                    $scope.model.errorMessage = "You must enter a valid phone number in international format. Eg: +44 7778 609466";
                    return;
                }
                if (message.length == 0) {
                    $scope.model.errorMessage = "You must specify a message!";
                    return;
                }
                // We are making an AJAX call, so we set this to true
                $scope.model.isAjaxInProgress = true;
                $http({
                    url: '/api/smsresource',
                    method: "POST",
                    data: { to: $scope.model.phoneNumber, message: $scope.model.message }
                }).
                    success(function (data, status, headers, config) {
                        // this callback will be called asynchronously
                        // when the response is available
                        $scope.model.allMessages = data;
                        // We are done with AJAX loading
                        $scope.model.isAjaxInProgress = false;
                    }).
                    error(function (data, status, headers, config) {
                        // called asynchronously if an error occurs
                        // or the server returns a response with an error status
                        $scope.model.errorMessage = "Error occurred status:" + status;
                        // We are done with AJAX loading
                        $scope.model.isAjaxInProgress = false;
                    });
            }
        }]);

    We can see from the previous listing how the functions that are called from the view are defined in the controller. It should also be evident how easy it is to make AJAX calls to consume our MVC4 REST Web API. Now we are left with the final piece: we need to define a route that associates a particular path with the view and the controller we have defined.

    4. Add a new route that loads the controller and the partial

    This is the easiest part of the puzzle.
4. Add a new route that loads the controller and the partial

This is the easiest part of the puzzle. We simply define another route in the /app/js/app.js file:

$routeProvider.when('/sms', {
    templateUrl: '/app/partials/smsview.html',
    controller: 'SmsController'
});
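For context, a complete /app/js/app.js might look something like the following. This is a sketch under stated assumptions rather than the article's exact file: the module name 'myApp' and the default route are assumptions, and on Angular 1.2 and later the routing service lives in the separate ngRoute module (in 1.0/1.1 it was part of the core):

// A minimal sketch of /app/js/app.js, assuming the module name 'myApp'
var myApp = angular.module('myApp', ['ngRoute']); // 'ngRoute' needed on Angular 1.2+

myApp.config(['$routeProvider', function ($routeProvider) {
    $routeProvider.
        when('/sms', {
            templateUrl: '/app/partials/smsview.html',
            controller: 'SmsController'
        }).
        otherwise({ redirectTo: '/sms' }); // assumed default route
}]);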
Conclusion

In this article we have seen how much of the server-side functionality in the MVC4 framework can be moved to the browser, delivering a snappy and fast user interface. We have seen how we can build client-side, HTML-only views that avoid the messy syntax of server-side Razor views, and we have built a functioning app from the ground up. The significant advantage of this approach to building web apps is that the front end can be completely platform independent: even though we used ASP.NET to create our REST API, we could just as easily have used another language such as Node.js or Ruby without changing a single line of our front-end code. Angular is a rich framework, and we have only touched on the basic functionality required to create a SPA. For readers who wish to delve further into the Angular framework, we recommend the following URL as a starting point: http://docs.angularjs.org/misc/started.

To get started with the code for this project:

1. Sign up for an account at http://plus.sent.ly (free)
2. Add your phone number
3. Go to the "My Identities" page and note down your Sender ID, Consumer Key and Consumer Secret
4. Download the code for this article at: https://docs.google.com/file/d/0BzjEWqSE31yoZjZlV0d0R2Y3eW8/edit?usp=sharing
5. Change the values of Sender ID, Consumer Key and Consumer Secret in the web.config file
6. Run the project through Visual Studio!
    Read the article

  • Reference Data Management and Master Data: Are They Related?

    - by Mala Narasimharajan
    Submitted By: Rahul Kamath

    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise Master Data Management (MDM) solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, revamping organization structures to drive business transformation and operational efficiencies, restructuring sales territories to enable equitable distribution of leads to sales teams following the acquisition of new products, or adding additional cost centers to enable fine-grained control over expenses. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

    What is reference data? How does it relate to master data?

    Reference data is a close cousin of master data. While master data is challenged with problems of unique identification, may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler and more slowly changing, but has semantic content that is used to categorize or group other information assets (including master data) and gives them contextual value. In fact, the creation of a new master data element may require new reference data to be created. For example, when a European company acquires a US business, chances are that they will now need to adapt their product line taxonomy to include a new category to describe the newly acquired US product line. Further, the cross-border transaction will also result in a revised geo hierarchy. The addition of new products represents changes to master data, while changes to product categories and the geo hierarchy are examples of reference data changes. [1]

    The following is an illustrative list of examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships and cross-domain mappings, or standards.

    Types & Codes: Transaction Codes; Lookup Tables (e.g., Gender, Marital Status, etc.); Status Codes; Role Codes; Domain Values
    Taxonomies: Industry Classification Categories and Codes (e.g., North American Industry Classification System, NAICS); Product Categories; Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense); Market Segments; Universal Standard Products and Services Classification (UNSPSC); eCl@ss
    Relationships / Mappings: Product/Segment and Product/Geo; City → State → Postal Codes; Customer/Market Segment and Business Unit/Channel; Country Codes / Currency Codes / Financial Accounts; International Classification of Diseases (ICD) mappings (e.g., ICD-9 → ICD-10)
    Standards: Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601); Currency Codes (e.g., ISO); Country Codes (e.g., ISO 3166, UN); Date/Time and Time Zones (e.g., ISO 8601); Tax Rates

    Why manage reference data?

    Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior or provide meaningful segmentation for analyzing transaction data. Further, mapping reference data often requires human judgment.

    Sample Use Cases of Reference Data Management

    Healthcare: Diagnostic Codes

    The reference data challenges in the healthcare industry offer a case in point.
    Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the ICD-9 and ICD-10 code sets offer diagnosis codes at very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way master data management does. Moreover, to build reports that track the utilization, frequency and quality of diagnoses, medical practitioners may need to "cross-walk" mappings, either forward to ICD-10 or backwards to ICD-9 depending upon the reporting time horizon; a sketch of such a cross-walk appears after this article's body.

    Spend Management: Product, Service & Supplier Codes

    Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card), as well as ACH and bank systems, can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.

    Change Management: Point-of-Sale Transaction Codes and Product Codes

    In the specialty finance industry, enterprises are confronted with usury laws, governed at the state and local level, that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes to comply with the changing regulations.

    Multi-National Companies: Industry Classification Schemes

    As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries use different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications are impacted by a given change.
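    To make the cross-walk idea concrete, here is a minimal sketch in JavaScript (the language of the first article on this page). The specific code pairs are illustrative examples of published ICD-9 to ICD-10 general equivalence mappings, not an authoritative subset, and real cross-walks are far larger and curated by hand:

    // A minimal sketch of an ICD-9 <-> ICD-10 cross-walk; codes shown are illustrative.
    // One ICD-9 code may map to several finer-grained ICD-10 codes, so values are arrays.
    var icd9ToIcd10 = {
        '250.00': ['E11.9'],    // Type 2 diabetes mellitus without complications
        '401.9':  ['I10'],      // Essential (primary) hypertension
        '493.90': ['J45.909']   // Unspecified asthma, uncomplicated
    };

    // Build the backward map so reports can be run against either time horizon
    var icd10ToIcd9 = {};
    for (var icd9 in icd9ToIcd10) {
        icd9ToIcd10[icd9].forEach(function (icd10) {
            icd10ToIcd9[icd10] = icd10ToIcd9[icd10] || [];
            icd10ToIcd9[icd10].push(icd9);
        });
    }

    // Cross-walk a diagnosis code forward or backward depending on the reporting horizon
    function crosswalk(code, direction) {
        var map = (direction === 'forward') ? icd9ToIcd10 : icd10ToIcd9;
        return map[code] || []; // unmapped codes need human judgment, as the article notes
    }

    console.log(crosswalk('250.00', 'forward'));  // ['E11.9']
    console.log(crosswalk('I10', 'backward'));    // ['401.9']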
    References
    [1] Master Data versus Reference Data, Malcolm Chisholm, April 1, 2006.
    Read the article

  • Seizing the Moment with Mobility

    - by Divya Malik
    Empowering people to work where they want to work is becoming more critical now with the consumerisation of technology. Employees are bringing their own devices to the workplace and expecting to be productive wherever they are. Sales people welcome the ability to run their critical business applications where they can be most effective, which is typically on the road and while they are still with the customer. Oracle has invested many years of research in understanding customers' mobile requirements.

    "The keys to building the best user experience were building in a lot of flexibility in ways to support sales, and being useful," said Arin Bhowmick, Director, CRM, for the Applications UX team. "We did that by talking to and analyzing the needs of a lot of people in different roles." The team studied real-life sales teams. "We wanted to study salespeople in context with their work," Bhowmick said. "We studied all user types in the CRM world because we wanted to build a user interface and user experience that would cater to sales representatives, marketing managers, sales managers, and more. Not only did we do studies in our labs, but also we did studies in the field and in mobile environments because salespeople are always on the go."

    Here is a recent post from Hernan Capdevila, Vice President, Oracle Fusion Apps, which was featured on the Oracle Applications Blog.

    Mobile devices are forcing a paradigm shift in the workplace. They're changing the way businesses can do business and the type of cultures they can nurture. As our customers talk about their mobile needs, we hear them saying they want instant-on access to enterprise data so workers can be more effective at their jobs anywhere, anytime. They also are interested in being more cost effective from an IT point of view. The mobile revolution, with the idea of BYOD (bring your own device), has added an interesting dynamic, because previously IT was driving the employee device strategy and ecosystem. That's been turned on its head with the consumerization of IT. Now employees are figuring out how to use their personal devices for work purposes, and IT has to figure out how to adapt.

    Blurring the Lines between Work and Personal Life

    My vision of where businesses will be five years from now is that our work lives and personal lives will be more interwoven. In turn, enterprises will have to determine how to make employees' work lives fit more into the fabric of their personal lives. And personal devices like smartphones are going to drive significant business value, because they let us accomplish things very incrementally. I can be sitting on a train or in a taxi and be productive. At the end of any meeting, I can capture ideas and tasks or follow up with people in real time. Mobile devices enable this notion of seizing the moment, capitalizing on opportunities that might otherwise have slipped away because we're not connected. For the industry shapers out there, this is game changing.

    The lean and agile workforce is definitely the future. The old model, in which the board sits down with the executive team to lay out strategic objectives for a three- to five-year plan, brings in HR to determine how to staff the strategic activities, kicks off the execution, and then revisits the plan in three to five years to create another three- to five-year plan, is yesterday's model. Businesses that continue to approach innovation in that way are in the dinosaur age.
    Today it's about incremental planning and incremental execution, which requires a lot of cohesion and synthesis within the workforce. There needs to be an interweaving within the workforce in how ideas cascade down, how people engage, how they stay connected, and how insights are shared.

    How to Survive and Thrive in Today's Marketplace

    The notion of Facebook isn't new. We lived it in the pre-Internet days with America Online and Prodigy; Facebook is just the renaissance of these services in a more viral and pervasive way. And given the trajectory of the consumerization of IT, with people bringing their personal tooling to work, the enterprise has no option but to adapt. The sooner businesses realize this from a top-down point of view, the sooner they will be able to really drive significant innovation and adapt to the marketplace. There is a small number of companies right now (I think it's closer to 20% than 80%, but the number is expanding) that are able to really innovate in this incremental marketplace. So from a competitive point of view, there's no choice but to be social and stay connected.

    By far the majority of users on Facebook and LinkedIn are mobile users: people on iPhones, smartphones, Android phones, and tablets. It's not the couch people, right? It's the on-the-go people, those people at the coffee shops. Usually when you're sitting at your desk on a big desktop computer, you typically have better things to do than to be on Facebook.

    This is a topic I'm extremely passionate about, because I think mobile devices are game changing. Mobility delivers significant value to businesses; it also brings dramatic simplification from a functional point of view and transforms our work-life experience.

    Hernan Capdevila
    Vice President, Oracle Applications Development

    Read the article
