Search Results

Search found 19033 results on 762 pages for 'blank screen'.


  • Why is a fully transparent pixel still rendered?

    - by Mr Bell
    I am trying to make a pixel shader that achieves an effect similar to this video: http://www.youtube.com/watch?v=f1uZvurrhig&feature=related

    My basic idea is:

    1. Render the scene to a temp render target.
    2. Render the previously rendered image with a slight fade onto another temp render target.
    3. Draw the current scene on top of that.
    4. Draw the results onto a render target that persists between draws.
    5. Draw the results onto the screen.

    But I am having problems with the fading portion. If I have my pixel shader return a color with its A component set to 0, shouldn't that basically amount to drawing nothing? (Assuming that the sprite batch blend mode is set to AlphaBlend.) To test this I have my pixel shader return a transparent red color. Instead of nothing being drawn, it draws a partially transparent red box. I hope that my question makes sense, but if it doesn't, please ask me to clarify.

    Here is the drawing code:

        public override void Draw(GameTime gameTime)
        {
            GraphicsDevice.SamplerStates[1] = SamplerState.PointWrap;
            drawImageOnClearedRenderTarget(presentationTarget, tempRenderTarget, fadeEffect);
            drawImageOnRenderTarget(sceneRenderTarget, tempRenderTarget);
            drawImageOnClearedRenderTarget(tempRenderTarget, presentationTarget);
            GraphicsDevice.SetRenderTarget(null);
            drawImage(backgroundTexture);
            drawImage(presentationTarget);
            base.Draw(gameTime);
        }

        private void drawImage(Texture2D image, Effect effect = null)
        {
            spriteBatch.Begin(0, BlendState.AlphaBlend, SamplerState.PointWrap, null, null, effect);
            spriteBatch.Draw(image, new Rectangle(0, 0, width, height), Color.White);
            spriteBatch.End();
        }

        private void drawImageOnRenderTarget(Texture2D image, RenderTarget2D target, Effect effect = null)
        {
            GraphicsDevice.SetRenderTarget(target);
            drawImage(image, effect);
        }

        private void drawImageOnClearedRenderTarget(Texture2D image, RenderTarget2D target, Effect effect = null)
        {
            GraphicsDevice.SetRenderTarget(target);
            GraphicsDevice.Clear(Color.Transparent);
            drawImage(image, effect);
        }

    Here is the fade pixel shader:

        sampler TextureSampler : register(s0);

        float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
        {
            float4 c = 0;
            c = tex2D(TextureSampler, texCoord);
            //c.a = clamp(c.a - 0.05, 0, 1);
            c.r = 1;
            c.g = 0;
            c.b = 0;
            c.a = 0;
            return c;
        }

        technique Fade
        {
            pass Pass1
            {
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }
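    A likely explanation, going only by the code above and assuming XNA 4.0 semantics: BlendState.AlphaBlend expects premultiplied alpha - it blends as source.rgb + dest.rgb * (1 - source.a) - so a returned color of (1, 0, 0, 0) still adds full red even though its alpha is zero. Two possible fixes, sketched:

        // In the pixel shader: premultiply color by alpha so a = 0 really draws nothing.
        c.rgb *= c.a;
        return c;

        // Or in drawImage: use straight (non-premultiplied) alpha blending instead.
        spriteBatch.Begin(0, BlendState.NonPremultiplied, SamplerState.PointWrap, null, null, effect);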


  • Distribution upgrade problem "No new release found"

    - by fefe
    I'm using Ubuntu 11.04. The update manager once found the new release 'oneiric', and it still shows this screen when I log on using ssh:

        Welcome to Ubuntu 11.04 (GNU/Linux 2.6.38-14-generic x86_64)

         * Documentation:  https://help.ubuntu.com/

        0 packages can be updated.
        0 updates are security updates.

        New release 'oneiric' available.
        Run 'do-release-upgrade' to upgrade to it.

        Last login: Wed Apr 25 16:22:48 2012 from ***

    But I didn't upgrade then, and I changed my apt sources. Now I cannot upgrade to 'oneiric'. do-release-upgrade shows:

        $ sudo do-release-upgrade
        Checking for a new ubuntu release
        No new release found
        $

    And apt-get dist-upgrade shows:

        $ sudo apt-get dist-upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Calculating upgrade... Done
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        $

    I can successfully update all my packages. File contents of /etc/apt/sources.list:

        $ cat /etc/apt/sources.list
        ## See sources.list(5) for more information, especialy
        # Remember that you can only use http, ftp or file URIs
        deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty main universe restricted multiverse
        deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty main universe restricted multiverse
        deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty-security universe main multiverse restricted
        deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty-security universe main multiverse restricted
        deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty-updates universe main multiverse restricted
        deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty-updates universe main multiverse restricted
        deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty-backports universe main multiverse restricted
        deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty-backports universe main multiverse restricted
        # deb http://ubuntu.dormforce.net/ubuntu/ lucid main universe restricted multiverse
        # deb-src http://ubuntu.dormforce.net/ubuntu/ lucid main universe restricted multiverse
        # deb http://ubuntu.dormforce.net/ubuntu/ lucid-security universe main multiverse restricted
        # deb-src http://ubuntu.dormforce.net/ubuntu/ lucid-security universe main multiverse restricted
        # deb http://ubuntu.dormforce.net/ubuntu/ lucid-updates universe main multiverse restricted
        # deb-src http://ubuntu.dormforce.net/ubuntu/ lucid-updates universe main multiverse restricted
        # CDROMs are managed through the apt-cdrom tool.
        # deb http://archive.canonical.com lucid partner
        # deb http://archive.canonical.com lucid-security partner
        # deb http://archive.canonical.com lucid-updates partner
        # deb-src http://archive.canonical.com lucid partner
        # deb-src http://archive.canonical.com lucid-security partner
        # deb-src http://archive.canonical.com lucid-updates partner
        #medibuntu repo
        # deb http://packages.medibuntu.org/ lucid free non-free
        # deb-src http://packages.medibuntu.org/ lucid free non-free
        # deb http://extras.ubuntu.com/ubuntu maverick main #Third party developers repository
        deb http://mirrors.sohu.com/ubuntu/ natty main restricted multiverse universe
        deb-src http://mirrors.sohu.com/ubuntu/ natty main universe restricted multiverse
        #Added by software-properties
        deb http://security.ubuntu.com/ubuntu/ natty-security universe main multiverse restricted
        deb-src http://mirrors.sohu.com/ubuntu/ natty-security universe main multiverse restricted
        deb http://mirrors.sohu.com/ubuntu/ natty-updates universe main multiverse restricted
        deb-src http://mirrors.sohu.com/ubuntu/ natty-updates universe main multiverse restricted

    File contents of /etc/update-manager/meta-release:

        $ cat /etc/update-manager/meta-release
        # default location for the meta-release file
        [METARELEASE]
        URI = http://changelogs.ubuntu.com/meta-release
        URI_LTS = http://changelogs.ubuntu.com/meta-release-lts
        URI_UNSTABLE_POSTFIX = -development
        URI_PROPOSED_POSTFIX = -proposed

    What may be the problem?
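    One thing worth checking first (a hedged suggestion, not from the post): besides the meta-release URIs above, do-release-upgrade consults /etc/update-manager/release-upgrades. If Prompt there is set to lts or never, a natty system will report "No new release found" even though oneiric exists:

        $ cat /etc/update-manager/release-upgrades
        [DEFAULT]
        Prompt=normal    # 'lts' or 'never' would suppress the offer of oneiric

        # After correcting Prompt, try again:
        $ sudo do-release-upgrade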


  • NYC Silverlight FireStarter - June 5th 2010 at the NYC Microsoft Office

    - by Sam Abraham
    On Saturday, June 5th, 2010, I spent my Saturday morning at the NYC Silverlight FireStarter. Presenting were Peter Laudati from Microsoft and Jason Beres, Matt Van Horn and Todd Snyder from Infragistics. I watched the simulcast for the morning sessions, as I was tied up with some work, but ended up finally making it to the Microsoft office and had the opportunity to attend the last hour of the event in person.

    For me, the quality of the simulcast was as good as in-person attendance as far as sound/video quality and interaction with the speakers went. In the background was a screen with tweets from remote attendees asking questions or commenting on the presentations. Presenters did periodically stop to answer the tweeted questions as well as questions from attendees. The only thing I missed was getting my hands on some of that swag that was (literally) flying in the air at the event floor.

    Upon my arrival at the Microsoft office location in NYC, I spoke with Rachel Appel and Peter Laudati, asking for permission to take a few photos to record the outstanding effort that took place in putting this event together. Both agreed, and I started putting my photography skills to work.

    You can always gauge the quality of an event by the number of its attendees who opt to stay till the last minute, as well as the level of interaction of the audience with the speaker. With most of the FireStarter attendees remaining till the very end of the talk, and with the many questions that were asked, one can simply judge the event a success as per my aforementioned criteria.

    Evaluation forms were passed around, and Peter strongly encouraged the audience to openly speak their mind as they recorded their comments. I didn't get to submit my evaluation, as I was busy recording the event in photos, so here it goes: I believe that a lot of hard work was put into making this event a reality. The quality of the speakers, the topics and the level of geekiness at the event was outstanding. Overall, aside from a minor issue with lunch delivery time, this event was of high quality, and I am very sure everyone's evaluation will be in line with my analysis of it being a great success. Below are a few photos of the event.

    --Sam Abraham
    Site Director - West Palm Beach .Net User Group
    www.Fladotnet.com

    NYC Silverlight FireStarter speakers, from left to right: Peter Laudati, Todd Snyder, Matt Van Horn & Jason Beres. As Jason wasn't quite visible in that photo, a closeup was taken. (It was Jason's birthday and he had to leave a bit early, so the Infragistics team thought outside the box...)

    Full room - that was at the last hour of the event. Another view of the full room. Discussions during the break. End-of-event raffle.


  • Branching and Merging with TortoiseSVN

    - by capgpilk
    For this example I am using Visual Studio 2010, TortoiseSVN 1.6.6, Subversion 1.6.6 and AnkhSVN 2.1.7819.411, so if you are using different versions, some of these screenshots may differ. This assumes you have your code checked in to the trunk directory and have a standard SVN structure of trunk, branches and tags. There are a number of developers who prefer to develop solely in a branch and never touch the trunk, but the process is generally the same, and you may be on a small team and prefer to work in the trunk and branch occasionally.

    There are three steps to successful branching. First you branch; then, when you are ready, you reintegrate any changes that other developers may have made to the trunk into your branch. Finally, when your branch and the trunk are in sync, you merge it back into the trunk.

    Branch

    1. Right-click the project root in Windows Explorer > TortoiseSVN > Branch/Tag.
    2. Enter the branch label in the 'To URL' box, for example /branches/1.1.
    3. Choose Head revision.
    4. Check Switch working copy.
    5. Click OK.
    6. Make any changes to the branch.
    7. Make any changes to the trunk.
    8. Commit any changes.

    For this example I copied the project to another location prior to branching and made changes to that copy using Notepad++, then committed it to SVN; as this directory is mapped to the trunk, that is what gets updated.

    Merge Trunk with Branch

    1. Right-click the project root in Windows Explorer > TortoiseSVN > Merge.
    2. Choose 'Merge a range of revisions'.
    3. In 'URL to merge from' choose your trunk.
    4. Click Next, then the 'test merge' button. This will highlight any conflicts. Here we have one conflict we will need to resolve, because we made a change and checked it in to the trunk earlier.
    5. Click Merge. Now we have the opportunity to edit that conflict. This will open up TortoiseMerge, which allows us to resolve the issue. In this case I want both changes.
    6. Perform an Update, then Commit.

    Reloading in Visual Studio shows we have all changes that have been made to both trunk and branch.

    Merge Branch with Trunk

    1. Switch the working copy by right-clicking the project root in Windows Explorer > TortoiseSVN > Switch.
    2. Switch to the trunk, then click OK.
    3. Right-click the project root in Windows Explorer > TortoiseSVN > Merge.
    4. Choose 'Reintegrate a branch'.
    5. In 'From URL' choose your branch, then click Next.
    6. Click 'Test merge'; this shouldn't show any conflicts.
    7. Click Merge.
    8. Perform an Update, then Commit.

    Open the project in Visual Studio; we now have all changes. So there we have it: we are connected back to the trunk and have all the updates merged.
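    For reference, the same workflow from the Subversion 1.6 command line (a sketch; the repository URLs are placeholders for your own):

        # Branch: copy trunk to a new branch and switch the working copy to it
        svn copy http://server/repo/trunk http://server/repo/branches/1.1 -m "Branching 1.1"
        svn switch http://server/repo/branches/1.1

        # Merge trunk changes into the branch, resolve any conflicts, commit
        svn merge http://server/repo/trunk
        svn commit -m "Reintegrated trunk changes into 1.1"

        # Reintegrate the branch back into trunk
        svn switch http://server/repo/trunk
        svn merge --reintegrate http://server/repo/branches/1.1
        svn commit -m "Merged 1.1 back into trunk"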


  • Bash arrays and case statements - review my script

    - by Felipe Alvarez
        #!/bin/bash
        # Change the environment in which you are currently working.
        # Actually, it calls the relevant 'lettus.sh' script
        if [ "${BASH_SOURCE[0]}" == "$0" ]; then
            echo "Try running this as \". chenv $1\""
            exit 0
        fi

        usage(){
            echo "Usage: . ${PROG}       -- Shows a list of user-selectable environments."
            echo "       . ${PROG} [env] -- Select environment."
            echo "       . ${PROG} -h    -- Shows this usage screen."
            return
        }

        showEnv(){
            # check if index0 exists, assume we have at least the first (zeroth) element
            #if [ -z "${envList}" ]; then
            if [ -z "${envList[0]}" ]; then
                echo "array \$envList is empty! " >&2
                return 1
            fi
            # Show all elements in array (0 -> n-1)
            for i in $(seq 0 $((${#envList[@]} - 1))); do
                echo ${envList[$i]}
            done
            return
        }

        setEnv(){
            if [ -z "$1" ]; then usage; return; fi
            case $1 in
                cold)    FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_cold.sh;;
                coles)   FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_coles.sh;;
                fc)      FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_fc.sh;;
                fcrm)    FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_fcrm.sh;;
                stable)  FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_stable.sh;;
                tip)     FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_tip.sh;;
                uat)     FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_uat.sh;;
                wellmdc) FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_wellmdc.sh;;
                *)       usage; return;;
            esac
            if $IS_SOURCED; then
                echo "Environment \"$1\" selected."
                echo "Now sourcing file \"$FILE_TO_SOURCE\"..."
                . ${FILE_TO_SOURCE}
                return
            else
                return 1
            fi
        }

        main(){
            if [ -z "$1" ]; then showEnv; return; fi
            case $1 in
                -h) usage;;
                *)  setEnv $1;;
            esac
            return
        }

        PROG="chenv"
        # create array of user-selectable environments
        envList=( cold coles fc fcrm stable tip uat wellmdc )
        main "$@"
        return

    If I could, I'd like to get some feedback on a better way to accomplish any of the following: run through the case statement; make the script trivially simple to maintain/upgrade/update.
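    One possible simplification, sketched under the assumption that bash 4 associative arrays are available on the target machines: the case statement collapses into a lookup table, so adding an environment becomes a one-line change, and showEnv can list the keys directly.

        declare -A envFiles=(
            [cold]=/u2/tip/conf/ctrl/lettus_cold.sh
            [coles]=/u2/tip/conf/ctrl/lettus_coles.sh
            # ... one entry per environment ...
        )

        setEnv(){
            # Unknown or missing environment name falls back to usage.
            [ -n "$1" ] && [ -n "${envFiles[$1]}" ] || { usage; return 1; }
            echo "Environment \"$1\" selected."
            . "${envFiles[$1]}"
        }

        # showEnv becomes a one-liner over the table's keys:
        # printf '%s\n' "${!envFiles[@]}"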


  • Is commented out code really always bad?

    - by nikie
    Practically every text on code quality I've read agrees that commented-out code is a bad thing. The usual example is that someone changed a line of code and left the old line there as a comment, apparently to confuse people who read the code later on. Of course, that's a bad thing.

    But I often find myself leaving commented-out code in another situation: I write a computational-geometry or image-processing algorithm. To understand this kind of code, and to find potential bugs in it, it's often very helpful to display intermediate results (e.g. draw a set of points to the screen or save a bitmap file). Looking at these values in the debugger usually means looking at a wall of numbers (coordinates, raw pixel values). Not very helpful. Writing a debugger visualizer every time would be overkill. I don't want to leave the visualization code in the final product (it hurts performance, and usually just confuses the end user), but I don't want to lose it, either. In C++, I can use #ifdef to conditionally compile that code, but I don't see much difference between this:

        /*
        // Debug Visualization: draw set of found interest points
        for (int i = 0; i < count; i++)
            DrawBox(pts[i].X, pts[i].Y, 5, 5);
        */

    and this:

        #ifdef DEBUG_VISUALIZATION_DRAW_INTEREST_POINTS
        for (int i = 0; i < count; i++)
            DrawBox(pts[i].X, pts[i].Y, 5, 5);
        #endif

    So, most of the time, I just leave the visualization code commented out, with a comment saying what is being visualized. When I read the code a year later, I'm usually happy I can just uncomment the visualization code and literally "see what's going on". Should I feel bad about that? Why? Is there a superior solution?

    Update: S. Lott asks in a comment: "Are you somehow 'over-generalizing' all commented code to include debugging as well as senseless, obsolete code? Why are you making that overly-generalized conclusion?"

    I recently read Robert C. Martin's "Clean Code", which says: "Few practices are as odious as commenting-out code. Don't do this!" I've looked at the paragraph in the book again (p. 68); there's no qualification, no distinction made between different reasons for commenting out code. So I wondered if this rule is over-generalizing (or if I misunderstood the book), or if what I do is bad practice for some reason I didn't know.
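    A third option worth weighing (an editorial aside, not from the original post): guard the visualization with an ordinary constant instead of the preprocessor, so the compiler still parses the block on every build. Commented-out and #ifdef'd code both rot silently when the surrounding code changes; this variant cannot. A minimal sketch:

        // Flip to true (or drive from a command-line flag) when debugging.
        static const bool kDrawInterestPoints = false;

        if (kDrawInterestPoints) {
            // Dead-code elimination strips this in release builds,
            // but it must still compile, so it can't silently rot.
            for (int i = 0; i < count; i++)
                DrawBox(pts[i].X, pts[i].Y, 5, 5);
        }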


  • Bluetooth Dial-Up Networking using Blueman

    - by leemes
    I want to configure a dial-up network connection via Bluetooth to my phone in order to access the internet. I use Lubuntu 12.04 (Ubuntu with LXDE), which has the Network Manager applet and Blueman applet installed. I guess these are the same tools as on an Ubuntu installation, hence I ask my question on this site. My phone is a Sony Ericsson W810i, my laptop is a Lenovo S10-2, and my mobile phone provider is o2 Germany.

    I scanned for my mobile phone using the Blueman applet. I connected the dial-up network via the context menu > Serial Ports > Dial-up Networking. A notification bubble says that the connection is available on the interface named ppp0. ifconfig tells a different story: there is no ppp0 or anything similar. I only see my eth0 (wired ethernet), eth1 (wifi) and lo interfaces. Of course, I can't ping google.com, as the interface really seems to be not present at all.

    When the dial-up network is being connected, my mobile phone says that it is connecting to the internet. Afterwards, I see the active connection on the phone's screen. When successfully connecting with the phone using another computer, it behaves exactly the same, so I guess that the phone isn't the problem.

    I don't know if I configured the dial-up correctly. I use the phone number *99#, which is very common for most mobile ISPs, and the APN which my ISP tells me to use. (I can't find the number on their support page, so I just use the default value *99#.) My mobile ISP is o2 Germany.

    There are how-tos out there which use the Network Manager applet to set up a Bluetooth dial-up connection, but I can't see any Bluetooth devices in the context menu as on the screenshots in those how-tos. Do you have any suggestions what might be wrong / what I should try?

    EDIT: When choosing "Network Access Point" in the device's context menu instead of Serial Ports > Dial-Up Networking, an interface bnep0 appears. However, no IPv4 address is assigned to that interface (only IPv6), and the phone does not connect to the internet. Am I missing something? Can I connect to the internet after setting up this network connection?
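    One way to isolate the problem (an assumption on my part, not something from the post): bypass Blueman's dialer and dial manually, so the modem chat is visible. If Blueman has bound the phone to an rfcomm device, a minimal wvdial configuration would look roughly like this - the device node and APN are placeholders to adapt:

        ; /etc/wvdial.conf
        [Dialer Defaults]
        Modem = /dev/rfcomm0
        Baud = 460800
        Init2 = AT+CGDCONT=1,"IP","internet"   ; your o2 APN here
        Phone = *99#
        Username = x
        Password = x
        Stupid Mode = 1

    Then run sudo wvdial and watch whether pppd actually negotiates an IP address; if it does, a ppp0 interface should appear in ifconfig.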


  • Coding different states in Adventure Games

    - by Cardin
    I'm planning out an adventure game and can't figure out the right way to implement the behaviour of a level depending on the state of story progression. My single-player game features a huge world where the player has to interact with people in a town at various points in the game. However, depending on story progression, different things would be presented to the player; e.g. the Guild Leader will change locations from the town square to various locations around the city; doors would only unlock at certain times of the day after finishing a particular routine; different cut-scene/trigger events happen only after a particular milestone has been reached.

    I naively thought of using a switch{} statement initially to decide what the NPC should say or where he could be found, and making quest objectives interactable only after checking a global game_state variable's condition. But I realised I would quickly run into a lot of different game states and switch cases in order to change the behaviour of an object. That switch statement would also be massively hard to debug, and I guess it might also be hard to use in a level editor.

    So I thought, instead of having a single object with multiple states, maybe I should have multiple instances of the same object, each with a single state. That way, if I use something like a level editor, I can put an instance of the NPC at all the different locations he could ever appear at, and also an instance for each conversation state he has. But that means there'll be a lot of inactive, invisible game objects floating around the level, which might be trouble for memory, or simply hard to see in a level editor, I don't know.

    Or, simply, make an identical but separate level for each game state. This feels like the cleanest and most bug-free way to do things, but it feels like massive manual work making sure each version of the level is really identical to the others.

    All my methods feel so inefficient, so to recap my question: is there a better or standardised way to implement the behaviour of a level depending on the state of story progression?

    PS: I don't have a level editor yet - thinking of using something like the JME SDK or making my own.
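    A common answer to this (an editorial aside, not from the post) is to make the state data-driven: each placement or trigger carries a condition over a set of global story flags, and the level simply filters its object list whenever a flag changes. The switch statement disappears into data that a level editor can edit. A rough C# sketch with illustrative names:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Global story flags, e.g. "act2_started", "guild_quest_done".
        class StoryState
        {
            public HashSet<string> Flags = new HashSet<string>();
            public bool Has(string flag) { return Flags.Contains(flag); }
        }

        // One possible location/state of an object, active only while its condition holds.
        class Placement
        {
            public string NpcId;
            public int X, Y;
            public Func<StoryState, bool> Condition;
        }

        class Level
        {
            List<Placement> placements = new List<Placement>
            {
                // Guild Leader in the square early on, near the docks in act 2.
                new Placement { NpcId = "guild_leader", X = 10, Y = 4,
                                Condition = s => !s.Has("act2_started") },
                new Placement { NpcId = "guild_leader", X = 52, Y = 9,
                                Condition = s => s.Has("act2_started") },
            };

            // Re-run whenever a story flag changes: only matching placements spawn.
            public IEnumerable<Placement> Active(StoryState s)
            {
                return placements.Where(p => p.Condition(s));
            }
        }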


  • Which techniques to study?

    - by Djentleman
    Just to give you some background info, I'm studying a programming major at a tertiary level and am in my third year, so I'm not a newbie off the street. However, I am still quite new to game programming as a subset of programming. One of my personal projects for next semester is to design and create a 2D platformer game with emphasis on procedural generation and "neato" effects (think metroidvania). I've written up a list of some techniques to help me improve my personal skills (using XNA for the time being). The list is as follows:

    - QuadTrees: Build a basic program in XNA that moves basic 2D sprites (circles and squares) around a set path and speed and changes their colour when they collide. Add functionality to add and delete objects of different sizes (select a direction and speed when adding and just drag and drop them in).
    - Particles: Build a basic program in XNA in which you can select different colours and create particle effects of those colours on screen by clicking and dragging the mouse around (simple particles emerging from where the mouse is clicked). Add functionality where you can change the amount of particles to be drawn, the speed at which they travel and when they expire. Possibly implement gravity and wind after part 3 is complete.
    - Physics: Build a basic program in XNA where you have a ball in a set 2D environment, a wind slider, and a gravity slider (can go to negative for reverse gravity). You can click to drag the ball around and release to throw it and, depending on what you do, the ball interacts with the environment. Implement other shapes afterwards.
    - Random 2D terrain generation: Build a basic program in XNA that randomly generates terrain (including hills, caves, etc.) created from 2D tiles. Add functionality that draws the tiles from a tileset and places different tiles depending on where they lie on the y-axis (dirt on top, then rock, then lava, etc.).
    - Randomised objects: Build a basic program in XNA that, when a button is clicked, displays a randomised item sprite based on parameters (type, colour, etc.) with the images pulled from tilesets. Add the ability to save the item as an object, which stores it in a side pane where it can be selected for viewing.
    - Movement: Build a basic program in XNA where you can move an object around in an environment (tile-based) with a camera that pans with it. No gravity. Implement gravity and wind, and allow the character to jump and fall with some basic platforms.

    So my question is this: Are there any other commonly used techniques that I should research, and can I get some suggestions as to the effectiveness of the techniques I've chosen to work on (e.g., don't do QuadTree stuff because [insert reason here], or do [insert technique here] before you start working on particles because [insert reason here])? I hope this is clear enough, and please let me know if I can further clarify anything!
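    Since QuadTrees lead the list, here is a rough C# sketch of the core structure that exercise implies (illustrative only, not from any particular library; Rectangle is the XNA type): a node subdivides once it holds more than a few items, and queries only descend into quadrants that intersect the query bounds.

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        class QuadTree
        {
            const int Capacity = 4;
            Rectangle bounds;
            List<Rectangle> items = new List<Rectangle>();
            QuadTree[] children;    // null until this node subdivides

            public QuadTree(Rectangle bounds) { this.bounds = bounds; }

            public void Insert(Rectangle item)
            {
                if (children != null)
                    foreach (var c in children)
                        if (c.bounds.Contains(item)) { c.Insert(item); return; }

                // Items spanning child boundaries stay at this node.
                items.Add(item);
                if (children == null && items.Count > Capacity) Subdivide();
            }

            void Subdivide()
            {
                int w = bounds.Width / 2, h = bounds.Height / 2;
                children = new[]
                {
                    new QuadTree(new Rectangle(bounds.X,     bounds.Y,     w, h)),
                    new QuadTree(new Rectangle(bounds.X + w, bounds.Y,     w, h)),
                    new QuadTree(new Rectangle(bounds.X,     bounds.Y + h, w, h)),
                    new QuadTree(new Rectangle(bounds.X + w, bounds.Y + h, w, h))
                };
            }

            // Collect every stored item overlapping 'area' (e.g. for collision tests).
            public void Query(Rectangle area, List<Rectangle> results)
            {
                if (!bounds.Intersects(area)) return;
                foreach (var i in items)
                    if (i.Intersects(area)) results.Add(i);
                if (children != null)
                    foreach (var c in children) c.Query(area, results);
            }
        }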


  • OpenGL - have object follow mouse

    - by kevin james
    I want to have an object follow around my mouse on the screen in OpenGL. (I am also using GLEW, GLFW, and GLM.) The best idea I've come up with is:

    Get the coordinates within the window with glfwGetCursorPos. The window was created with:

        window = glfwCreateWindow(1024, 768, "Test", NULL, NULL);

    and the code to get coordinates is:

        double xpos, ypos;
        glfwGetCursorPos(window, &xpos, &ypos);

    Next, I use GLM's unProject to get the coordinates in "object space":

        glm::vec4 viewport = glm::vec4(0.0f, 0.0f, 1024.0f, 768.0f);
        glm::vec3 pos = glm::vec3(xpos, ypos, 0.0f);
        glm::vec3 un = glm::unProject(pos, View*Model, Projection, viewport);

    There are two potential problems I can already see. The viewport is fine, as the initial x,y coordinates of the lower left are indeed 0,0, and it's indeed a 1024x768 window. However, the position vector I create doesn't seem right. The Z coordinate should probably not be zero. However, glfwGetCursorPos returns 2D coordinates, and I don't know how to go from there to the 3D window coordinates, especially since I am not sure what the 3rd dimension of the window coordinates even means (since computer screens are 2D).

    Then, I am not sure if I am using unProject correctly. Assume the View, Model, and Projection matrices are all OK. If I passed in the correct position vector in window coordinates, does the unProject call give me the coordinates in object coordinates? I think it does, but the documentation is not clear.

    Finally, to each vertex of the object I want to follow the mouse around, I just increment the x coordinate by un[0], the y coordinate by -un[1], and the z coordinate by un[2]. However, since my position vector that is being unprojected is likely wrong, this is not giving good results; the object does move as my mouse moves, but it is offset quite a bit (i.e. moving the mouse a lot doesn't move the object that much, and the z coordinate is very large). I actually found that the z coordinate un[2] is always the same value no matter where my mouse is, probably because the position vector I pass into unProject always has a value of 0.0 for z.

    Edit: The (incorrectly) unprojected x-values range from about -0.552 to 0.552, and the y-values from about -0.411 to 0.411.
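    Two details that commonly cause exactly this kind of offset (offered as a hedged diagnosis, not from the post): GLFW reports the cursor from the top-left with y growing downward, while glm::unProject expects window coordinates with the origin at the bottom-left; and the window-space z is the depth-buffer value in [0,1], not 0. A sketch of both fixes:

        // Flip y into OpenGL's bottom-left convention.
        float winX = (float)xpos;
        float winY = 768.0f - (float)ypos;

        // Either read the actual depth under the cursor...
        GLfloat winZ;
        glReadPixels((GLint)winX, (GLint)winY, 1, 1,
                     GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
        glm::vec3 pos = glm::vec3(winX, winY, winZ);

        // ...or unproject at the near and far planes (z = 0 and 1) and
        // intersect the resulting ray with the plane your object lives in.
        glm::vec3 nearPt = glm::unProject(glm::vec3(winX, winY, 0.0f), View * Model, Projection, viewport);
        glm::vec3 farPt  = glm::unProject(glm::vec3(winX, winY, 1.0f), View * Model, Projection, viewport);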


  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    Originally published in CoDe Magazine Editorial.

    Microsoft recently released the first CTP of a new development environment called WebMatrix, which, along with some of its supporting technologies, is squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers.

    Let's face it: ASP.NET development isn't exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it's not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft's attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server.

    WebMatrix is made up of several components and technologies:

    IIS Developer Express

    IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that's been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server, such as the inability to serve custom ISAPI extensions (i.e., things like PHP or ASP classic), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set, giving much better compatibility between development and live deployment scenarios.

    SQL Server Compact 4.0

    Database access is a key component of most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new - it's been around for a few years, but it's been severely hobbled in the past by terrible tool support and the inability to support more than a single connection, in Microsoft's attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections, and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don't have to install a special system configuration to run SQL Compact, as it is a drop-in database engine: copy the small assembly into your bin folder (or load it from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests.

    Additionally, WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill. Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature.

    ASP.NET Razor View Engine

    What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There's no support for a "page model" or any of the other Web Forms features of the full-page framework - just a lightweight scripting engine that works with plain markup plus embedded expressions and code.

    The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here's a very simple example of what Razor markup looks like, along with some comment annotations:

        <!DOCTYPE html>
        <html>
            <head>
                <title></title>
            </head>
            <body>
                <h1>Razor Test</h1>

                <!-- simple expressions -->
                @DateTime.Now
                <hr />

                <!-- method expressions -->
                @DateTime.Now.ToString("T")

                <!-- code blocks -->
                @{
                    List<string> names = new List<string>();
                    names.Add("Rick");
                    names.Add("Markus");
                    names.Add("Claudio");
                    names.Add("Kevin");
                }

                <!-- structured block statements -->
                <ul>
                @foreach(string name in names){
                    <li>@name</li>
                }
                </ul>

                <!-- Conditional code -->
                @if(true) {
                    <!-- Literal Text embedding in code -->
                    <text>
                    true
                    </text>;
                }
                else
                {
                    <!-- Literal Text embedding in code -->
                    <text>
                    false
                    </text>;
                }
            </body>
        </html>

    Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a "page" becomes a code file, with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn't look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine, minus the overhead of the large Page object model.

    However, there are differences. Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC's view implementation as well as many of the helper classes found in MVC.

    It's not hard to guess the motivation for this sort of view engine: for beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content. The syntax is easier to read and grok, and much shorter to type than ASP.NET alligator tags (<% %>); it's also easier to understand aesthetically what's happening in the markup code.

    Razor is also a better fit for Microsoft's vision of ASP.NET MVC: it's a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight, since it doesn't carry all the features and object model of Web Forms with it, and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well - it makes it much easier to create script- or meta-driven output generators for many types of applications, from code/screen generators, to simple form letters, to data-merging applications with user customizability. For me personally this is a very useful side effect, and who knows, maybe Microsoft will actually standardize their scripting engines (die T4 die!) on this engine.

    Razor also better fits the "view-based" approach, where the view is supposed to be mostly a visual representation that doesn't hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall, I wouldn't be surprised if Razor becomes the new standard view engine for MVC in the future - and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0.

    Razor can also be used in existing Web Forms and MVC applications, although that's not working currently unless you manually configure the script mappings and add the appropriate assemblies. It's possible to do it, but it's probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages.

    WebMatrix Development Environment

    To tie all three of these technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A Run button allows you to quickly run pages in the project manager in a variety of browsers. There's no debugging support for code at this time. Note that Razor pages don't require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that's needed to see changes while testing an application locally. It's essentially using the auto-compiling Web Project that was introduced with .NET 2.0; all code is compiled at run time into dynamically created assemblies in the ASP.NET temp folder.

    WebMatrix also has PHP editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with wizards taking you through installation of tools, dependencies, and configuration of the database as needed. WebMatrix leverages the Web Platform Installer to pull the pieces down from websites in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes and fill in a few simple configuration options, and you end up with a running application that's ready to be customized. Nice!

    You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool can also handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step.

    Simplified Database Access

    The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods:

        @{
            var db = Database.OpenFile("FirstApp.sdf");
            string sql = "select * from customers where Id > @0";
        }
        <ul>
        @foreach(var row in db.Query(sql,1)){
            <li>@row.FirstName @row.LastName</li>
        }
        </ul>

    The Query function takes a SQL statement plus any number of positional (@0, @1, etc.) SQL parameters as simple values. The result is returned as a collection of rows, each of which is a row object with dynamic properties for each of the columns, giving easy (though untyped) access to each of the fields. Likewise, Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter-passing schemes. Note these queries use string-based queries rather than LINQ or Entity Framework's strongly typed LINQ queries. While this may seem like a step back, it's also in line with the expectations of non-.NET script developers, who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why something like this was not included in .NET from the beginning, with Microsoft instead making developers build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this sort of syntax - simple, static data object methods that perform simple data tasks with one line of code - is a common approach in scripting languages and a good match for folks coming from PHP/Python, etc. It seems Microsoft has taken great advantage of .NET 4.0's dynamic typing to provide this sort of interface for row iteration, where each row has properties for each field.

    FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn't accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code, or even LINQ or Entity Framework models created outside of WebMatrix in separate assemblies, as required.

    The good, the bad, the obnoxious - it's still .NET

    The beauty (or curse, depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it's still .NET. Although the syntax may look foreign, it's still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder. In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a host of HTML helpers. If you've used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy - there's a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference

    Who needs WebMatrix? Uhm… good question

    Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity: a minimal development environment and an easy-to-use script engine/language that makes it easy to get started. There's also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something it would be good to eventually see in Visual Studio as well.

    The question is whether this is too little too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for other, simpler platforms like PHP or Python, which cater to the down-and-dirty developer. Microsoft will be hard pressed to win those folks - and other hardcore PHP developers - back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it's still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC-style helpers Microsoft provides are a good step in the right direction, but I suspect they're not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers try to make .NET more accessible, but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers, you are still going to have to write some C# or VB or other .NET code. If the target is a hobby/amateur/non-programmer, the learning curve isn't made any easier by WebMatrix; it's just been shifted a tad bit further along in your development endeavor, to the point where you run out of canned components supplied either by Microsoft or the community.

    The database helpers are interesting, and actually I've heard a lot of discussion from various developers who've been resisting .NET for a really long time perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it's practically trivial to create helpers like these or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in 'getting stuff done' easily than necessarily following the latest and greatest practices, which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big, life-changing issues of development, rather than the bread-and-butter scenarios that many developers care about to get their work accomplished.

    And that, in the end, may be WebMatrix's main raison d'être: to bring some focus back at Microsoft on the fact that simpler and more high-level solutions are actually needed to appeal to non-high-end developers, while still providing the necessary tools for the high-end developers who want to follow the latest and greatest trends.

    The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it really can be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components), which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware, with required functionality missing - especially debugging and IntelliSense, but also general editor support. It's not clear whether this is because the product is still in an early alpha release or whether it's simply designed to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it.

    If anything, WebMatrix's technology pieces (which are really independent of the WebMatrix product) are what are interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of concerns that people have had, and have complained about, with previous SQL Compact implementations. By far the most interesting and useful technology, though, seems to be the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone, pluggable scripting/view engine. The first winner of this is going to be ASP.NET MVC, which can now have a cleaner view model that isn't inconsistent due to the baggage of non-implemented Web Forms features that don't work in MVC. But I expect that Razor will eventually end up in many other applications as a scripting and code generation engine. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there's some synergy flowing both ways between Razor and MVC.

    As an existing ASP.NET developer who's already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn't give you anything that you want. The tools provided are minimal and offer nothing that you can't get in Visual Studio today, except the minimal Razor syntax highlighting, so there's little need to take a step back. With Visual Studio integration coming later, there's little reason to look at WebMatrix for tooling.

    It's good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we've been piling on more and more new features without taking a step back to see how complicated the development/configuration/deployment process has become. Sometimes it's good to take a step - or several steps - back, take another look, and realize just how far we've come. WebMatrix is one of those reminders, and one that likely will result in some positive changes on the platform as a whole.

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in ASP.NET, IIS7


  • Set a Video as Your Desktop Wallpaper with VLC

    - by DigitalGeekery
    Are you tired of static desktop wallpapers and want something a bit more entertaining? Today we'll take a look at setting a video as wallpaper in VLC media player. Download and install VLC player; you'll find the download link below.

    1. Open VLC and select Tools > Preferences.
    2. On the Preferences window, select the Video button on the left.
    3. Under Video Settings, select DirectX video output from the Output dropdown list.
    4. Click Save before exiting, and then restart VLC.
    5. Next, select a video and begin playing it with VLC. Right-click on the screen, select Video, then DirectX Wallpaper. You can achieve the same result by selecting Video from the menu and clicking DirectX Wallpaper.

    If you're using Windows Aero themes, you may get a warning message and your theme will switch automatically to a basic theme. After the wallpaper is enabled, minimize VLC player and enjoy the show as you work.

    When you are ready to switch back to your normal wallpaper, click Video, and then close out of VLC. Occasionally we had to manually change our wallpaper back to normal. You can do that by right-clicking on the desktop and selecting your theme.

    Conclusion

    This might not make the most productive desktop environment, but it is pretty cool. It's definitely not the same old boring wallpaper!

    Download VLC
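    As a side note (an assumption about newer builds, not covered in the article): more recent VLC versions expose the same wallpaper mode as a command-line switch, which is handy for a desktop shortcut:

        vlc --video-wallpaper C:\Videos\clip.mp4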


  • What is the start point in game development? Where to start?

    - by Dragon
    I understand I'm not unique with such a question; there are a lot of questions like this one. But I hope you'll take a minute and maybe can give me a piece of advice. I have an idea to develop games, but I don't know where the start point in game development is. The learning curve isn't as straight as in learning a programming language, but I want to give it a try.

    I have some experience with OOP and programming in general. I know (not too deeply) the C# and Java programming languages. I searched for info on where to start and read a lot of blogs, forums and so on. Once I decided "stop wandering around, just start developing a game", and I started. At the moment I have a console version of a very simple game (RPS - rock-paper-scissors) developed in C#. It has different modes: "player vs cpu" and "player vs player". Some time later I looked at the code and decided it should be refactored or even redeveloped from scratch. And I thought at that time, "GUI is what I need. I can add logic later." And now I'm here.

    I've already decided to make RPS with a GUI, then make it multiplayer, and so on. I'm not thinking about 3D now; 2D is enough. It doesn't matter what language to use, C# or Java; I found frameworks for both - XNA (C#) and Slick (Java). Both are good for 2D game development. But I know nothing about sprites, how to bind objects on the screen, and so on. You can say "you don't need that for such a simple game as RPS", but RPS is just the beginning. I have some ideas, like a "Tower Defense" game... you know, everybody has ideas and wishes... and this knowledge is useful and in some way obligatory.

    So what is the start point to achieve my plans, ideas, wishes? Where to start? Is it possible to make the game development learning curve a little bit straighter? Or are there ways that amateurs and game development beginners have used for years? Thank you for your answers and advice in advance.

    P.S. Sorry that this post turned out an essay, but I tried to express my wish to start acting. Hope I managed to do it.


  • iOS - pass UIImage to shader as texture

    - by martin pilch
    I am trying to pass a UIImage to a GLSL shader. The fragment shader is:

        varying highp vec2 textureCoordinate;

        uniform sampler2D inputImageTexture;
        uniform sampler2D inputImageTexture2;

        void main()
        {
            highp vec4 color = texture2D(inputImageTexture, textureCoordinate);
            highp vec4 color2 = texture2D(inputImageTexture2, textureCoordinate);
            gl_FragColor = color * color2;
        }

    What I want to do is send images from the camera and do a multiply blend with a texture. When I just send data from the camera, everything is fine. So the problem should be with sending the other texture to the shader. I am doing it this way:

        - (void)setTexture:(UIImage*)image forUniform:(NSString*)uniform
        {
            CGSize sizeOfImage = [image size];
            CGFloat scaleOfImage = [image scale];
            CGSize pixelSizeOfImage = CGSizeMake(scaleOfImage * sizeOfImage.width, scaleOfImage * sizeOfImage.height);

            // create context
            GLubyte *spriteData = (GLubyte *)malloc(pixelSizeOfImage.width * pixelSizeOfImage.height * 4 * sizeof(GLubyte));
            CGContextRef spriteContext = CGBitmapContextCreate(spriteData, pixelSizeOfImage.width, pixelSizeOfImage.height, 8, pixelSizeOfImage.width * 4, CGImageGetColorSpace(image.CGImage), kCGImageAlphaPremultipliedLast);

            // draw image into context
            CGContextDrawImage(spriteContext, CGRectMake(0.0, 0.0, pixelSizeOfImage.width, pixelSizeOfImage.height), image.CGImage);

            // get uniform of texture
            GLuint uniformIndex = glGetUniformLocation(__programPointer, [uniform UTF8String]);

            // generate texture
            GLuint textureIndex;
            glGenTextures(1, &textureIndex);
            glBindTexture(GL_TEXTURE_2D, textureIndex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

            // create texture
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, pixelSizeOfImage.width, pixelSizeOfImage.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);

            glActiveTexture(GL_TEXTURE1);
            glBindTexture(GL_TEXTURE_2D, textureIndex);

            // "send" to shader
            glUniform1i(uniformIndex, 1);

            free(spriteData);
            CGContextRelease(spriteContext);
        }

    The uniform for the texture is fine; glGetUniformLocation does not return -1. The texture is a PNG file with a resolution of 2000x2000 pixels.

    PROBLEM: When the texture is passed to the shader, I get a "black screen". Maybe the problem is in the parameters of the CGContext or the parameters of the glTexImage2D function. Thank you.
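    Two hedged guesses based only on the code above: first, the texture is created while GL_TEXTURE0 is still the active unit, so the glBindTexture/glTexImage2D calls can disturb the camera texture's binding on unit 0 - selecting the unit before generating avoids that; second, 2000x2000 is a non-power-of-two size, which OpenGL ES 2.0 only samples with CLAMP_TO_EDGE wrapping and no mipmaps (already the case here), but trying a 2048x2048 test image is a quick way to rule out a stricter GPU.

        // Select the target unit *before* creating and configuring the texture,
        // so unit 0 (the camera texture) is left untouched.
        glActiveTexture(GL_TEXTURE1);
        glGenTextures(1, &textureIndex);
        glBindTexture(GL_TEXTURE_2D, textureIndex);
        // ... glTexParameteri / glTexImage2D exactly as above ...
        glUniform1i(uniformIndex, 1);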


  • How to Format a USB Drive in Ubuntu Using GParted

    - by Trevor Bekolay
    If a USB hard drive or flash drive is not properly formatted, it will not show up in the Ubuntu Places menu, making it hard to interact with. We'll show you how to format a USB drive using the tool GParted.

    Note: Formatting a USB drive will destroy any data currently stored on it. If you think that your USB drive is already properly formatted, but Ubuntu just isn't picking it up, try unplugging it and plugging it back in to a different USB slot, or restarting your machine with the device plugged in on start-up.

    1. Open a terminal by clicking on Applications in the top-left of the screen, then Accessories > Terminal.
    2. GParted should be installed by default, but we'll make sure it's installed by entering the following command in the terminal:

        sudo apt-get install gparted

    3. To open GParted, enter the following command in the terminal:

        sudo gparted

    4. Find your USB drive in the drop-down box at the top right of the GParted window. The drive should be unallocated - if it has a valid partition on it, then you may be looking at the wrong drive. Note: Make sure you're on the correct drive, as making changes to the wrong hard drive with GParted can delete all data on it!
    5. Assuming you're on the right drive, right-click on the unallocated grey block and click New.
    6. In the window that pops up, change the File System to fat32 for USB flash drives, NTFS for USB hard drives that will be used in Windows, or ext3/ext4 for USB hard drives that will be used exclusively in Linux. Add a label if you'd like, and then click Add.
    7. Click the green checkmark and then the Apply button to apply the changes.

    GParted will now format your drive. If you're formatting a large USB hard drive, this can take some time. Once the process is done, you can close GParted, and the drive will show up in the Places menu. Clicking on the drive will mount it and open it in a File Browser window. It will also add a shortcut to the drive on the Desktop by default. Your USB drive is now ready to store your files!
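    For the terminal-inclined, the same formatting step can be done with mkfs once you know the device node (a sketch - /dev/sdX1 is a placeholder, and pointing mkfs at the wrong device destroys its data):

        # Identify the device first:
        sudo fdisk -l

        sudo mkfs.vfat -F 32 /dev/sdX1   # FAT32 for flash drives
        sudo mkfs.ntfs /dev/sdX1         # NTFS for Windows-bound disks
        sudo mkfs.ext4 /dev/sdX1         # ext4 for Linux-only disks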

    Read the article

  • The Oracle Cash Management Secret Very Few Customers Know About

    - by Theresa Hickman
Did you know that Oracle Cash Management has a robust cash positioning feature? I had no idea. I was under the mistaken impression that Oracle Cash Management only did bank statement reconciliations. It seems I am not alone. In fact, many Oracle Financials customers are also not aware of this, even though it is delivered for free with the Oracle Financials license. Even better, last week Oracle released an enhancement to Oracle Cash Management for Release 12 that will greatly help customers with their cash positioning needs. As we all know, credit is tight these days. Companies need better visibility of their cash and other liquidity positions to make better use of their cash resources. Today, many customers manage their cash positions manually using spreadsheets. We also hear how many of them maintain larger-than-normal balances in numerous bank accounts because they just do not have the visibility, and therefore the comfort, they need. Although spreadsheets may work in the short term, they are not the best way to manage your cash positions for the long term, especially if you have dozens or even hundreds of bank and brokerage accounts. Spreadsheets are also a lot riskier because they can be overwritten or deleted and are difficult to audit. With the newly enhanced positioning feature in Oracle Cash Management, customers can manage their daily cash positions using an Excel-like interface that is very flexible and user-configurable. You can link the worksheet to an unlimited number of bank accounts to automatically retrieve your opening balances and your current/intra-day cash inflows and outflows, as well as your expected cash flows from your FX, investment, and debt positions if you have Oracle's Treasury module. Oracle Cash Management also has direct integration with Oracle Receivables, Oracle Payables, and Payroll, which adds to the comprehensive picture of what's happening with your organization's cash in real time. [The original post includes a screenshot of the cash positioning page here.] As you can see, your treasurers can obtain a holistic view of all cash positions across any number of bank accounts, as well as other sources of cash flow movements. Depending on how they manage their accounts, they can also use this feature to initiate or monitor bank account sweeps or transfers between their zero balance accounts (ZBA) or cash pools. The cash position worksheet provides drill-down for more detail and the ability to manually enter items directly into the worksheet for even greater flexibility and control. The enhancements to this feature were released last week; patches are available for Release 12.0.6 and 12.1.1. For more information, visit the following website: http://launch.oracle.com. PIN: yes2try

    Read the article

  • JWT Token Security with Fusion Sales Cloud

    - by asantaga
When integrating Sales Cloud with a 3rd party application you often need to pass the user's identity to the 3rd party application, so that the 3rd party application knows who the user is and can make WebService callbacks to Sales Cloud as that user. Until recently this wasn't easily possible without using SAML, and one workaround was to pass the username, potentially even the password, from Sales Cloud to the 3rd party application using URL parameters. With Oracle Fusion R8 we now have a proper solution, called "JWT Token support". This is based on the industry-standard JSON Web Token; for more information see here. JWT works by letting the user generate a token (valid for a short period of time) for a specific application. This token is then passed to the 3rd party application as a GET parameter. The 3rd party application can then call into Sales Cloud and use this token for all webservice calls (the calls will be executed as the user who generated the token in the first place), or it can call a special HR WebService (UserService - findSelfUserDetails()) with the token, and Fusion will respond with the user's details. Some more details: the following goes through the scenario where you want to embed a 3rd party application within a WebContent frame (iFrame) within the opportunity screen. 1. Define your application using the topology manager in Setup and Maintenance. See this documentation link on topology manager. 2. From within your groovy script which defines the iFrame you wish to embed, write some code which looks like this:

    def thirdpartyapplicationurl = oracle.topologyManager.client.deployedInfo.DeployedInfoProvider.getEndPoint("My3rdPartyApplication")
    def crmkey = (new oracle.apps.fnd.applcore.common.SecuredTokenBean().getTrustToken())
    def url = thirdpartyapplicationurl + "param1=" + OptyId + "&jwt=" + crmkey
    return (url)

This snippet generates a URL which contains the hostname/endpoint of the 3rd party application and two parameters: the opportunity id, stored in parameter "param1", and the JWT token, stored in parameter "jwt". 3. From your 3rd party application you now have two options: execute a webservice call by first setting the header parameter "Authentication" to the JWT token (the webservice call will be executed against Fusion Applications as the user who generated the token), or, to find out "who you are", set the same header parameter and call the special webservice findSelfUserDetails() in the UserDetailsService. For more information: Oracle Sales Cloud documentation, specifically the chapter on JWT tokens; OTN samples, specifically the Rich UI With JWT Token sample; Oracle Fusion Applications general documentation.
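On the receiving side, the 3rd party application just reads the jwt parameter and replays it on its callbacks. A minimal Groovy sketch follows; the endpoint URL, the servlet-style request object, and the response handling are illustrative assumptions, not part of the documented API:

    // hypothetical request handler in the 3rd party web app
    def jwt = request.getParameter("jwt")        // token generated by Sales Cloud
    def optyId = request.getParameter("param1")  // opportunity id passed alongside it

    // call back into Sales Cloud as the token's user by placing the token
    // in the "Authentication" header, as described above
    def conn = new URL("https://salescloud.example.com/yourServiceEndpoint").openConnection()
    conn.setRequestProperty("Authentication", jwt)
    def response = conn.inputStream.text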

    Read the article

  • Keep taking the tablets

    - by Roger Hart
A guest editorial for the SimpleTalk newsletter. So why would Red Gate build an iPad game? Is it just because tablet devices are exciting and cool? OK, maybe a little. Mostly, it was seeing that the best existing tablet and smartphone apps do simple, intuitive things, using simple, intuitive interfaces to solve single problems. That's pretty close to what we call our own "intuitively simple" approach to software. Tablets and mobile could be fantastic for us, if we can identify those problems that a tablet device can solve. How do you create THE next tool for a completely new technology? We're glad we don't face that problem every day, but it's pretty exciting when we do. We figure we should learn by doing. We created "MobileFoo" (a Red Gate company), we picked up some shiny Apple tech, and we got to grips with Objective-C and life in the App Store ecosystem. The result so far is an iPad game: Stacks and Heaps. It's Rob and Marine's spin on Snakes and Ladders. Instead of snakes we have unhandled exceptions, a blue screen of death, and other hazards. We wanted something compellingly geeky on mobile, and we're pretty sure we've got it. It's trudging through App Store approval as we speak, but if you want to get an idea of what it is like to switch from .NET to Objective-C, take a look at Rob's post. Android and iOS are quite a culture change for Windows developers, so to give them a feel for the problems real users might have, we needed some real users: we offered our colleagues subsidised tablets. The only conditions were that they get used at work, and we get the feedback. Seeing tablets around the office is starting to give us some data points. Is typing the bottleneck? Will tablets ever cut it as text-entry devices, and could we fix it? Is mobile working held up by the pain of connecting to work LANs? How about security? Multi-tasking will let tablets do more. They're small, easy to use, almost instant to switch on, and they connect by Wi-Fi. There's plenty on that list to make a sysadmin twitchy. We'll find out as people spend more time working with these devices, and we'd love to hear what you think about tablet devices too. (Comments are filtered, what with the spam.)

    Read the article

  • Transparent Technology from Amazon

    - by David Dorf
Amazon has been making some interesting moves again, this time in the augmented humanity area.  Augmented humanity is about helping humans overcome their shortcomings using technology.  Putting a powerful smartphone in your pocket helps you in many ways, like navigating streets, communicating with far-off friends, and accessing information.  But the interface for smartphones is somewhat limiting and unnatural, so companies have been looking for ways to make the technology more transparent and therefore easier to use. When Apple helped us drop the stylus, we took a giant leap forward in simplicity.  Using touchscreens with intuitive gestures was part of the iPhone's original appeal.  People don't want to know that technology is there; they just want the benefits.  So what's the next leap beyond the touchscreen to make smartphones even easier to use? Two natural ways we interact with the world around us are sight and voice.  Google and Apple have been using both in their mobile platforms for limited use cases.  Nobody actually wants to type a text message, so why not just speak it?  And if you want more information about a book, why not just snap a picture of the cover?  That's much more accurate than trying to key in the title and/or author. So what's Amazon been doing?  First, Amazon released a new iPhone app called Flow that allows iPhone users to see information about products in context.  Yes, it's an augmented reality app that uses the phone's camera to view products and overlays data about the products on the screen.  For the most part it requires the barcode to be visible to correctly identify the product, but I believe it can also recognize certain logos.  Download the app and try it out, but don't expect perfection.  It's good enough to demonstrate the concept, but it's far from accurate enough.  (MobileBeat did a pretty good review.)  Extrapolate to the future and we might just have a heads-up display in our eyeglasses. The second interesting area is voice response, for which Siri is getting lots of attention.  Amazon may have purchased a voice recognition company called Yap, although the deal is not confirmed.  But it would make perfect sense, especially with the Kindle Fire in Amazon's lineup. I believe that over the next 3-5 years the way in which we interact with smartphones will mature, and they will become more transparent yet more important to our daily lives.  This will, of course, impact the way we shop, making information more readily accessible than it already is.  Amazon seems to be positioning itself to be at the forefront of this trend, so we should be watching them carefully.

    Read the article

  • SQLAuthority News – 5th Anniversary Giveaways

    - by pinaldave
Please read my 5th Anniversary post and my quick note on the history of the database. I am sure that we all have friends, and we value friendship more than anything. In fact, the complete model of Facebook is built on friends. If you have lots of friends, you must be a lucky person. Having a lot of friends is indeed a good thing. I consider all you blog readers my friends, so now I want to do something for you. What is it? Well, send me details about how many of your friends like my page and you will have a chance to win lots of learning materials for yourself and your friends. Here are the exciting prizes awaiting the lucky winner: Combo set of 5 Joes 2 Pros books, 1 for YOU and 1 for a friend. This is a USD 444 gift (each set is worth USD 222). It contains all five Joes 2 Pros books (Vol1, Vol2, Vol3, Vol4, Vol5) + 1 learning DVD. [Amazon] | [Flipkart] If you submitted an entry but didn't win the combo set of 5 Joes 2 Pros books, you could still win my SQL Server Wait Stats book as a consolation prize! I will pick the next 5 participants who have the highest number of friends who "liked" the Facebook page, http://facebook.com/SQLAuth. Instead of sending one copy, I will send you 2 copies so you can share one with a friend of yours. Well, it is important to share our learning and love with friends, isn't it? Note: Just take a screenshot of http://facebook.com/SQLAuth using the Print Screen function and send it by Nov 7th to pinal 'at' sqlauthority.com. There are no special freebies for early birds, so take your time and see if you can increase your friends' like count by Nov 7th. Guess - what is in it? It is quite possible you are not a Facebook or Twitter user. In that case you can still win a surprise from me. You have 2 days to guess what is in this box. If you guess correctly and you are one of the first 5 people with the correct answer, you will get what is in this box for free. Please note that you have only 48 hours to guess. Please give me your guess by commenting on this blog post. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Customizing Flowcharts in Oracle Tutor

- by Mary R. Keane
Today we're going to look at how you can customize the flowcharts within Oracle Tutor procedures, and how you can share those changes with other authors within your company. Here is an image of a flowchart within a Tutor procedure with the default size and color scheme. You may want to change the size of your flowcharts, as your end-users might have larger screens or need larger fonts. To change the size and number of columns, navigate to Tutor Author > Author Options > Flowcharts. The default is to have 4 columns appear in each flowchart, but if I change it to six, my end-users will see a denser flowchart. This might be too dense for my end-users, so I will change it to 5 columns, and I will also deselect the option to have separate task boxes. Now let's look at how to customize the colors. Within the Flowchart options dialog there is a button labeled "Colors." This brings up a dialog box of every object on a Tutor flowchart, and I can modify the color of each object, as well as the text within the object. If I click on the background, the "page" object appears in the Item field, and now I can customize the color and the title text by selecting Select Fill Color and/or Select Text Color. A dialog box with color choices appears. If I select Define Custom Colors, I can make my selections even more precise. Each time I change the color of an object, it appears in the selection screen. When the flowchart customization is finished, I can save my changes by naming the scheme. Although the color scheme I have chosen is rather silly looking, perhaps I want others to give me their feedback and make changes as they wish. I can share the color scheme with them by copying the FCP.INI file in the Tutor\Author directory into the same directory on their systems. If the other users have color schemes that they do not want to lose, they can copy the relevant lines from the FCP.INI file into their own file. If I flowchart my document with the new scheme, I can see how it looks within the document. Sometimes just one or two changes to the default scheme are enough to customize the flowchart to your company's color palette. I have seen customers who have only changed the Start object to green and the End object to red, and I've seen another customer who changed every object to some variant of black and orange. Experiment! And let us know how you have customized your flowcharts. Mary R. Keane, Senior Development Director, Oracle Tutor
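As a concrete sketch of that sharing step (the install path is an assumption based on a default setup, and the colleague's machine name and share are hypothetical), the copy is a one-liner from a Windows command prompt:

    copy "C:\Tutor\Author\FCP.INI" "\\COLLEAGUE-PC\Tutor\Author\FCP.INI"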

    Read the article

  • How to Use Your Android Phone as a Modem; No Rooting Required

    - by Jason Fitzpatrick
If your cellular provider’s mobile hotspot/tethering plans are too pricey, skip them and tether your phone to your computer without inflating your monthly bill. Read on to see how you can score free mobile internet. We recently received a letter from a How-To Geek reader, requesting help linking their Android phone to their laptop to avoid the highway robbery their cellular provider was insisting upon: "Dear How-To Geek, I recently found out that my cellphone company charges $30 a month to use your smartphone as a data modem. That’s an outrageous price when I already pay an extra $15 a month charge just because they insist that, since I have a smartphone, I need a data plan because I’ll supposedly use so much more data than other users. They expect me to pay what amounts to a $45 a month premium just to do some occasional surfing and email checking from the comfort of my laptop instead of the much smaller smartphone screen! Surely there is a workaround? I’m running Windows 7 on my laptop and I have an Android phone running Android OS 2.2. Help! Sincerely, No Double Dipping!" Well, Double Dipping, this is a sentiment we can strongly relate to, as many of us on staff are in a similar situation. It’s absurd that so many companies charge you extra to use the data connection on the phone you’re already paying for. There is no difference in bandwidth usage if you stream Pandora to your phone or to your laptop, for example. Fortunately we have a solution for you. It’s not free-as-in-beer, but it only costs $16 which, over the first year of use alone, will save you $344. Let’s get started!

    Read the article

  • Best Ati Radeon x1200 drivers for 12.10

    - by Jaclyn
[Long story short is at the bottom if you don't care about my ranting] OK, well, I have the unfortunate distinction of owning an Acer Extensa 4420 (yes, the model with the faulty motherboard, and no, I do not know how it is still working either). Long story short, I need the best drivers for my ATI Radeon X1200 integrated graphics card. The ATI proprietary drivers for 12.10 no longer support my video card. Currently when I try to play Minecraft, or any game really, the framerate is quite terrible, despite the fact that my handy-dandy system load monitor says that I have plenty of memory and CPU power (mind you, this is when I'm not playing Minecraft; that uses up all of my resources unless it's on the worst graphical settings, and even then I have terrible framerate but plenty of resources left over). I tried fglrx through a workaround guide, and it completely killed my display and I had to uninstall it. I'm considering just trying to install fglrx through Synaptic, but I am hesitant to do so since I don't want a repeat of the BSBDD!!! (Black Screen of Bad Drivers 'o DOOOM!!!), so I will wait until after I get some input from you fine ladies and gentlemen as to what your advice is. OK, so I'm running Xubuntu 12.10 64-bit. I upgraded from 10.10 to 11.04. Then about a month ago I went from Natty to Quantal via the update option in the package updater, and then I decided that I was sick of GNOME so I installed Xfce. I did not install new drivers between Natty and 12.10, so that shouldn't have changed; they were indeed quite terrible back then too, but now I'm using Xubuntu as my main OS, so I actually need good drivers. Uuuugh. Anyway, long story short: I need to know the best drivers available for my card and how to install them, because I am a Linux novice and I have tried everything I can think of, including searching Google. Small edit: I forgot to note that since I upgraded to 12.10, my VGA out does not work (I haven't had a chance to try the S-Video yet), and, possibly related, the USB port on that side of my laptop does not work anymore either.

    Read the article

  • MythTV lost recordings - "No recordings available" and no recording rules either

    - by nimasmi
I have a c. 6-year-old MythTV database. I recently upgraded from Ubuntu 10.04 to 12.04. This brought a MythTV upgrade from 0.24 to 0.25, which went well. Today, all my recordings have disappeared. They still exist in the /var/lib/mythtv/recordings folder, and the 'M' key on the Watch Recordings page says that there are 201 recordings available somewhere, but they will not display. See screenshot: (implicit thanks to whoever upvoted this, giving me sufficient reputation to upload images) Changing the filter does not remedy the fact that there is nothing shown in the lists. My Upcoming Recordings screen says that there are no rules set, but my list of previously recorded shows is still there and has an entry from as recently as 3am today. mythbackend --printsched gives the following:

    user@box:~$ mythbackend --printsched
    2012-09-22 12:59:20.537008 C mythbackend version: fixes/0.25 [v0.25.2-15-g46cab93] www.mythtv.org
    2012-09-22 12:59:20.537043 C Qt version: compile: 4.8.1, runtime: 4.8.1
    2012-09-22 12:59:20.537048 N Enabled verbose msgs: general
    2012-09-22 12:59:20.537076 N Setting Log Level to LOG_INFO
    2012-09-22 12:59:20.537142 I Added logging to the console
    2012-09-22 12:59:20.537152 I Added database logging to table logging
    2012-09-22 12:59:20.537279 N Setting up SIGHUP handler
    2012-09-22 12:59:20.537373 N Using runtime prefix = /usr
    2012-09-22 12:59:20.537394 N Using configuration directory = /home/user/.mythtv
    2012-09-22 12:59:20.537999 I Assumed character encoding: en_GB.UTF-8
    2012-09-22 12:59:20.538599 N Empty LocalHostName.
    2012-09-22 12:59:20.538610 I Using localhost value of box
    2012-09-22 12:59:20.538792 I Testing network connectivity to '192.168.1.2'
    2012-09-22 12:59:20.539420 I Starting process manager
    2012-09-22 12:59:20.541412 I Starting IO manager (read)
    2012-09-22 12:59:20.541715 I Starting IO manager (write)
    2012-09-22 12:59:20.541836 I Starting process signal handler
    2012-09-22 12:59:20.684497 N Setting QT default locale to EN_GB
    2012-09-22 12:59:20.684694 I Current locale EN_GB
    2012-09-22 12:59:20.684813 N Reading locale defaults from /usr/share/mythtv//locales/en_gb.xml
    2012-09-22 12:59:20.697623 I New static DB connectionDataDirectCon
    2012-09-22 12:59:20.704769 I MythCoreContext: Connecting to backend server: 192.168.1.2:6543 (try 1 of 1)
    Calculating Schedule from database.
    Inputs, Card IDs, and Conflict info may be invalid if you have multiple tuners.
    2012-09-22 12:59:27.710538 E MythSocket(21dfcd0:14): readStringList: Error, timed out after 7000 ms.
    2012-09-22 12:59:27.710592 C Protocol version check failure. The response to MYTH_PROTO_VERSION was empty. This happens when the backend is too busy to respond, or has deadlocked in due to bugs or hardware failure.

Things I have tried so far:
- restart the backend
- restart the frontend
- run mythtv-setup and check database passwords and IP addresses
- change the frontend setting for backend IP from localhost to 192.168.1.2 (the backend/frontend's IP)
- run optimize_mythdb.pl
Other suggestions appreciated.
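One hedged place to start, rather than a confirmed answer: the readStringList timeout and MYTH_PROTO_VERSION failure in that log point at a hung backend rather than missing data, and the recordings list can be verified straight from the mythconverg database. A sketch, assuming the stock Ubuntu service name and the MySQL credentials MythTV was set up with:

    # restart the possibly deadlocked backend
    sudo service mythtv-backend restart

    # confirm the recordings still exist in the database, and check which
    # recording group they are filed under (an unexpected recgroup can hide
    # them from the Watch Recordings filter)
    mysql -u mythtv -p mythconverg -e \
      "SELECT title, starttime, recgroup FROM recorded ORDER BY starttime DESC LIMIT 10;"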

    Read the article

  • Adding a Role to a Responsibility for Use with the Oracle E-Business Suite SDK for Java JAAS Implementation

    - by Juan Camilo Ruiz
This new post in the series on ADF integration with Oracle E-Business Suite was written by Sara Woodhull, Principal Product Manager on the Oracle E-Business Suite Applications Technology team. Based on a previous post in the series, a reader asked what to do if you have an existing responsibility assigned to lots of users, instead of the UMX role that the Oracle E-Business Suite SDK for Java JAAS Implementation requires. It would be tedious to assign a new role directly to hundreds or thousands of users, so naturally we'd like to avoid that if possible. Most people don't know this, but it's possible to assign a UMX role to a responsibility in Oracle User Management. Once you do that, users with your responsibility will all inherit your UMX role automatically. You can then proceed with using your UMX role with JAAS for ADF. Here is how to assign a UMX role to a responsibility in Oracle E-Business Suite: 1. In the User Management responsibility, go to the Roles & Role Inheritance page. 2. Search for the responsibility you want. 3. In the search results table, click the "View In Hierarchy" icon for your responsibility. Note that the codes for responsibilities start with FND_RESP, while the codes for roles start with UMX. 4. In the Role Inheritance Hierarchy, click on the Add Node icon (green plus +) for your responsibility. 5. Now you will see what appears to be the same page again, but it is a little different (note the text at the top telling you the role you select will be inherited…). This time, either search or expand nodes until you find your custom UMX role. Use the Quick Select to choose that role. 6. You will be sent back to the first screen, where you should see a confirmation message at the top. 7. On the same page you can verify that the custom UMX role is underneath the responsibility. You may need to expand one or more nodes to see the UMX role under the responsibility. You might see some other roles that have been inherited as well. Now that your users have the UMX role, you can test that the UMX role is being passed through to your ADF application through the Oracle E-Business Suite SDK for Java JAAS feature, as the query sketch below illustrates. Happy coding!
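For that verification step, one hedged option is to check the role assignment data directly in the database. The sketch below queries the standard Workflow directory services store; the table and column names are my assumptions from the stock schema, and JSMITH is a hypothetical user, so verify both against your own instance:

    -- does the user now inherit the custom UMX role via the responsibility?
    SELECT role_name
    FROM   wf_user_role_assignments
    WHERE  user_name = 'JSMITH'
    AND    role_name LIKE 'UMX%';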

    Read the article

< Previous Page | 629 630 631 632 633 634 635 636 637 638 639 640  | Next Page >